Test Report: KVM_Linux_crio 17953

eb30bbcea83871e91962f38accf20a5558557b42:2024-01-15:32709

Failed tests (23/310)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 152.41
53 TestAddons/StoppedEnableDisable 155.51
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 177.8
217 TestMultiNode/serial/PingHostFrom2Pods 3.18
224 TestMultiNode/serial/RestartKeepsNodes 689.96
226 TestMultiNode/serial/StopMultiNode 143.37
233 TestPreload 271.85
293 TestStartStop/group/no-preload/serial/Stop 140.16
294 TestStartStop/group/old-k8s-version/serial/Stop 139.79
297 TestStartStop/group/embed-certs/serial/Stop 139.63
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.5
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.35
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.33
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.26
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.25
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 382.52
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 495.89
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 310.86
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 165.3
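Each entry above can be re-run in isolation. A minimal reproduction sketch, assuming the standard minikube integration-test layout under test/integration and a checkout at the commit above; the harness flags that select the kvm2 driver and crio runtime are omitted because their exact names are not shown in this report:

	# hypothetical local re-run of one failed test, filtering by the subtest path
	go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'

The -run filter accepts a regex per subtest level, so a group of related failures (for example 'TestStartStop/group/.*/serial/Stop') can be replayed in a single invocation.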
TestAddons/parallel/Ingress (152.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-732359 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-732359 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-732359 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [359ec7f6-2e6a-453f-9838-5987a456e10a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [359ec7f6-2e6a-453f-9838-5987a456e10a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006770373s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-732359 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.487166376s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-732359 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.21
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-732359 addons disable ingress-dns --alsologtostderr -v=1: (1.77704842s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-732359 addons disable ingress --alsologtostderr -v=1: (7.926978074s)
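The step that actually failed is the curl issued inside the VM over ssh at addons_test.go:262: the remote command exited with status 28, which is curl's "operation timed out" code, so no response came back from the ingress within the test's window. A hedged sketch of manual checks for this situation, reusing the profile name from this run; the extra curl flags and the kubectl queries are generic additions, not commands taken from the report:

	# bound the curl so a hang is distinguishable from a refusal (assumed debugging step)
	out/minikube-linux-amd64 -p addons-732359 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# confirm the controller and the backing nginx pod/service/ingress are ready
	kubectl --context addons-732359 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-732359 -n default get pods,svc,ingress,endpoints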
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-732359 -n addons-732359
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-732359 logs -n 25: (1.317434894s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-479178                                                                     | download-only-479178 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-079711                                                                     | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-200610                                                                     | download-only-200610 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-479178                                                                     | download-only-479178 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-365521 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | binary-mirror-365521                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38145                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-365521                                                                     | binary-mirror-365521 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| addons  | disable dashboard -p                                                                        | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | addons-732359                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | addons-732359                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-732359 --wait=true                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-732359 ssh cat                                                                       | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | /opt/local-path-provisioner/pvc-b866449b-b281-439c-be7d-a58afe1f764c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-732359 addons disable                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | addons-732359                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | -p addons-732359                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-732359 ip                                                                            | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	| addons  | addons-732359 addons disable                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-732359 addons                                                                        | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | addons-732359                                                                               |                      |         |         |                     |                     |
	| addons  | addons-732359 addons disable                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | -p addons-732359                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-732359 ssh curl -s                                                                   | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-732359 addons                                                                        | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:30 UTC | 15 Jan 24 09:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-732359 addons                                                                        | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:30 UTC | 15 Jan 24 09:30 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-732359 ip                                                                            | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:32 UTC | 15 Jan 24 09:32 UTC |
	| addons  | addons-732359 addons disable                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:32 UTC | 15 Jan 24 09:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-732359 addons disable                                                                | addons-732359        | jenkins | v1.32.0 | 15 Jan 24 09:32 UTC | 15 Jan 24 09:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:52.704431   14218 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:52.704573   14218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:52.704585   14218 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:52.704592   14218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:52.704795   14218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:26:52.705388   14218 out.go:303] Setting JSON to false
	I0115 09:26:52.706157   14218 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":513,"bootTime":1705310300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:52.706211   14218 start.go:138] virtualization: kvm guest
	I0115 09:26:52.708335   14218 out.go:177] * [addons-732359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:52.709944   14218 notify.go:220] Checking for updates...
	I0115 09:26:52.709968   14218 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:26:52.711425   14218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:52.712881   14218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:26:52.714115   14218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:52.715480   14218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:26:52.716828   14218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:26:52.718377   14218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:52.748849   14218 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 09:26:52.750250   14218 start.go:298] selected driver: kvm2
	I0115 09:26:52.750262   14218 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:26:52.750271   14218 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:26:52.750984   14218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:52.751046   14218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:26:52.764436   14218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:26:52.764512   14218 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:52.764737   14218 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:26:52.764792   14218 cni.go:84] Creating CNI manager for ""
	I0115 09:26:52.764807   14218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:26:52.764820   14218 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:52.764836   14218 start_flags.go:321] config:
	{Name:addons-732359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-732359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:52.764995   14218 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:52.766875   14218 out.go:177] * Starting control plane node addons-732359 in cluster addons-732359
	I0115 09:26:52.768176   14218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:26:52.768217   14218 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:52.768229   14218 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:52.768297   14218 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:26:52.768308   14218 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:26:52.768605   14218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/config.json ...
	I0115 09:26:52.768631   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/config.json: {Name:mk57fceae485ad4643c94243f7ecb14cf29b59b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:26:52.768774   14218 start.go:365] acquiring machines lock for addons-732359: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:26:52.768831   14218 start.go:369] acquired machines lock for "addons-732359" in 42.106µs
	I0115 09:26:52.768852   14218 start.go:93] Provisioning new machine with config: &{Name:addons-732359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-732359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:26:52.768927   14218 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 09:26:52.770730   14218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0115 09:26:52.770848   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:26:52.770884   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:26:52.783673   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0115 09:26:52.784065   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:26:52.784580   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:26:52.784601   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:26:52.784910   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:26:52.785078   14218 main.go:141] libmachine: (addons-732359) Calling .GetMachineName
	I0115 09:26:52.785252   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:26:52.785368   14218 start.go:159] libmachine.API.Create for "addons-732359" (driver="kvm2")
	I0115 09:26:52.785397   14218 client.go:168] LocalClient.Create starting
	I0115 09:26:52.785432   14218 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 09:26:52.891830   14218 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 09:26:53.052625   14218 main.go:141] libmachine: Running pre-create checks...
	I0115 09:26:53.052649   14218 main.go:141] libmachine: (addons-732359) Calling .PreCreateCheck
	I0115 09:26:53.053176   14218 main.go:141] libmachine: (addons-732359) Calling .GetConfigRaw
	I0115 09:26:53.053619   14218 main.go:141] libmachine: Creating machine...
	I0115 09:26:53.053636   14218 main.go:141] libmachine: (addons-732359) Calling .Create
	I0115 09:26:53.053760   14218 main.go:141] libmachine: (addons-732359) Creating KVM machine...
	I0115 09:26:53.055024   14218 main.go:141] libmachine: (addons-732359) DBG | found existing default KVM network
	I0115 09:26:53.055875   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:53.055714   14239 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112610}
	I0115 09:26:53.061216   14218 main.go:141] libmachine: (addons-732359) DBG | trying to create private KVM network mk-addons-732359 192.168.39.0/24...
	I0115 09:26:53.127129   14218 main.go:141] libmachine: (addons-732359) DBG | private KVM network mk-addons-732359 192.168.39.0/24 created
	I0115 09:26:53.127162   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:53.127120   14239 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:53.127176   14218 main.go:141] libmachine: (addons-732359) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359 ...
	I0115 09:26:53.127196   14218 main.go:141] libmachine: (addons-732359) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 09:26:53.127246   14218 main.go:141] libmachine: (addons-732359) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 09:26:53.340282   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:53.340160   14239 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa...
	I0115 09:26:53.398968   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:53.398842   14239 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/addons-732359.rawdisk...
	I0115 09:26:53.399006   14218 main.go:141] libmachine: (addons-732359) DBG | Writing magic tar header
	I0115 09:26:53.399023   14218 main.go:141] libmachine: (addons-732359) DBG | Writing SSH key tar header
	I0115 09:26:53.399034   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:53.398945   14239 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359 ...
	I0115 09:26:53.399059   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359
	I0115 09:26:53.399071   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359 (perms=drwx------)
	I0115 09:26:53.399083   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 09:26:53.399097   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 09:26:53.399114   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 09:26:53.399128   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 09:26:53.399137   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 09:26:53.399143   14218 main.go:141] libmachine: (addons-732359) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 09:26:53.399151   14218 main.go:141] libmachine: (addons-732359) Creating domain...
	I0115 09:26:53.399159   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:53.399171   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 09:26:53.399186   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 09:26:53.399198   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home/jenkins
	I0115 09:26:53.399207   14218 main.go:141] libmachine: (addons-732359) DBG | Checking permissions on dir: /home
	I0115 09:26:53.399219   14218 main.go:141] libmachine: (addons-732359) DBG | Skipping /home - not owner
	I0115 09:26:53.400117   14218 main.go:141] libmachine: (addons-732359) define libvirt domain using xml: 
	I0115 09:26:53.400143   14218 main.go:141] libmachine: (addons-732359) <domain type='kvm'>
	I0115 09:26:53.400155   14218 main.go:141] libmachine: (addons-732359)   <name>addons-732359</name>
	I0115 09:26:53.400164   14218 main.go:141] libmachine: (addons-732359)   <memory unit='MiB'>4000</memory>
	I0115 09:26:53.400174   14218 main.go:141] libmachine: (addons-732359)   <vcpu>2</vcpu>
	I0115 09:26:53.400183   14218 main.go:141] libmachine: (addons-732359)   <features>
	I0115 09:26:53.400194   14218 main.go:141] libmachine: (addons-732359)     <acpi/>
	I0115 09:26:53.400205   14218 main.go:141] libmachine: (addons-732359)     <apic/>
	I0115 09:26:53.400216   14218 main.go:141] libmachine: (addons-732359)     <pae/>
	I0115 09:26:53.400232   14218 main.go:141] libmachine: (addons-732359)     
	I0115 09:26:53.400246   14218 main.go:141] libmachine: (addons-732359)   </features>
	I0115 09:26:53.400271   14218 main.go:141] libmachine: (addons-732359)   <cpu mode='host-passthrough'>
	I0115 09:26:53.400288   14218 main.go:141] libmachine: (addons-732359)   
	I0115 09:26:53.400298   14218 main.go:141] libmachine: (addons-732359)   </cpu>
	I0115 09:26:53.400343   14218 main.go:141] libmachine: (addons-732359)   <os>
	I0115 09:26:53.400360   14218 main.go:141] libmachine: (addons-732359)     <type>hvm</type>
	I0115 09:26:53.400376   14218 main.go:141] libmachine: (addons-732359)     <boot dev='cdrom'/>
	I0115 09:26:53.400446   14218 main.go:141] libmachine: (addons-732359)     <boot dev='hd'/>
	I0115 09:26:53.400476   14218 main.go:141] libmachine: (addons-732359)     <bootmenu enable='no'/>
	I0115 09:26:53.400489   14218 main.go:141] libmachine: (addons-732359)   </os>
	I0115 09:26:53.400501   14218 main.go:141] libmachine: (addons-732359)   <devices>
	I0115 09:26:53.400512   14218 main.go:141] libmachine: (addons-732359)     <disk type='file' device='cdrom'>
	I0115 09:26:53.400526   14218 main.go:141] libmachine: (addons-732359)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/boot2docker.iso'/>
	I0115 09:26:53.400542   14218 main.go:141] libmachine: (addons-732359)       <target dev='hdc' bus='scsi'/>
	I0115 09:26:53.400558   14218 main.go:141] libmachine: (addons-732359)       <readonly/>
	I0115 09:26:53.400572   14218 main.go:141] libmachine: (addons-732359)     </disk>
	I0115 09:26:53.400585   14218 main.go:141] libmachine: (addons-732359)     <disk type='file' device='disk'>
	I0115 09:26:53.400602   14218 main.go:141] libmachine: (addons-732359)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 09:26:53.400618   14218 main.go:141] libmachine: (addons-732359)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/addons-732359.rawdisk'/>
	I0115 09:26:53.400646   14218 main.go:141] libmachine: (addons-732359)       <target dev='hda' bus='virtio'/>
	I0115 09:26:53.400670   14218 main.go:141] libmachine: (addons-732359)     </disk>
	I0115 09:26:53.400686   14218 main.go:141] libmachine: (addons-732359)     <interface type='network'>
	I0115 09:26:53.400700   14218 main.go:141] libmachine: (addons-732359)       <source network='mk-addons-732359'/>
	I0115 09:26:53.400714   14218 main.go:141] libmachine: (addons-732359)       <model type='virtio'/>
	I0115 09:26:53.400726   14218 main.go:141] libmachine: (addons-732359)     </interface>
	I0115 09:26:53.400739   14218 main.go:141] libmachine: (addons-732359)     <interface type='network'>
	I0115 09:26:53.400758   14218 main.go:141] libmachine: (addons-732359)       <source network='default'/>
	I0115 09:26:53.400771   14218 main.go:141] libmachine: (addons-732359)       <model type='virtio'/>
	I0115 09:26:53.400784   14218 main.go:141] libmachine: (addons-732359)     </interface>
	I0115 09:26:53.400795   14218 main.go:141] libmachine: (addons-732359)     <serial type='pty'>
	I0115 09:26:53.400812   14218 main.go:141] libmachine: (addons-732359)       <target port='0'/>
	I0115 09:26:53.400822   14218 main.go:141] libmachine: (addons-732359)     </serial>
	I0115 09:26:53.400835   14218 main.go:141] libmachine: (addons-732359)     <console type='pty'>
	I0115 09:26:53.400850   14218 main.go:141] libmachine: (addons-732359)       <target type='serial' port='0'/>
	I0115 09:26:53.400859   14218 main.go:141] libmachine: (addons-732359)     </console>
	I0115 09:26:53.400870   14218 main.go:141] libmachine: (addons-732359)     <rng model='virtio'>
	I0115 09:26:53.400887   14218 main.go:141] libmachine: (addons-732359)       <backend model='random'>/dev/random</backend>
	I0115 09:26:53.400899   14218 main.go:141] libmachine: (addons-732359)     </rng>
	I0115 09:26:53.400911   14218 main.go:141] libmachine: (addons-732359)     
	I0115 09:26:53.400925   14218 main.go:141] libmachine: (addons-732359)     
	I0115 09:26:53.400939   14218 main.go:141] libmachine: (addons-732359)   </devices>
	I0115 09:26:53.400952   14218 main.go:141] libmachine: (addons-732359) </domain>
	I0115 09:26:53.400964   14218 main.go:141] libmachine: (addons-732359) 
	I0115 09:26:53.406159   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:96:b5:4c in network default
	I0115 09:26:53.406743   14218 main.go:141] libmachine: (addons-732359) Ensuring networks are active...
	I0115 09:26:53.406769   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:53.407274   14218 main.go:141] libmachine: (addons-732359) Ensuring network default is active
	I0115 09:26:53.407545   14218 main.go:141] libmachine: (addons-732359) Ensuring network mk-addons-732359 is active
	I0115 09:26:53.408009   14218 main.go:141] libmachine: (addons-732359) Getting domain xml...
	I0115 09:26:53.408616   14218 main.go:141] libmachine: (addons-732359) Creating domain...
	I0115 09:26:54.747544   14218 main.go:141] libmachine: (addons-732359) Waiting to get IP...
	I0115 09:26:54.748448   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:54.748816   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:54.748865   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:54.748812   14239 retry.go:31] will retry after 293.932694ms: waiting for machine to come up
	I0115 09:26:55.044414   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:55.044727   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:55.044758   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:55.044686   14239 retry.go:31] will retry after 301.556998ms: waiting for machine to come up
	I0115 09:26:55.348132   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:55.348507   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:55.348541   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:55.348463   14239 retry.go:31] will retry after 451.612857ms: waiting for machine to come up
	I0115 09:26:55.802124   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:55.802479   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:55.802502   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:55.802435   14239 retry.go:31] will retry after 396.549417ms: waiting for machine to come up
	I0115 09:26:56.201076   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:56.201454   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:56.201482   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:56.201411   14239 retry.go:31] will retry after 595.829528ms: waiting for machine to come up
	I0115 09:26:56.799153   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:56.799489   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:56.799520   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:56.799430   14239 retry.go:31] will retry after 813.62465ms: waiting for machine to come up
	I0115 09:26:57.614440   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:57.614829   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:57.614861   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:57.614787   14239 retry.go:31] will retry after 990.988281ms: waiting for machine to come up
	I0115 09:26:58.607448   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:58.607897   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:58.607926   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:58.607854   14239 retry.go:31] will retry after 1.313547445s: waiting for machine to come up
	I0115 09:26:59.923422   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:26:59.923767   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:26:59.923805   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:26:59.923733   14239 retry.go:31] will retry after 1.416296464s: waiting for machine to come up
	I0115 09:27:01.341222   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:01.341601   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:27:01.341630   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:27:01.341559   14239 retry.go:31] will retry after 1.501682327s: waiting for machine to come up
	I0115 09:27:02.844360   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:02.844855   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:27:02.844886   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:27:02.844797   14239 retry.go:31] will retry after 2.909853992s: waiting for machine to come up
	I0115 09:27:05.757911   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:05.758318   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:27:05.758350   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:27:05.758265   14239 retry.go:31] will retry after 2.813488264s: waiting for machine to come up
	I0115 09:27:08.573623   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:08.573863   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:27:08.573890   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:27:08.573830   14239 retry.go:31] will retry after 2.846656731s: waiting for machine to come up
	I0115 09:27:11.423610   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:11.424022   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find current IP address of domain addons-732359 in network mk-addons-732359
	I0115 09:27:11.424045   14218 main.go:141] libmachine: (addons-732359) DBG | I0115 09:27:11.423987   14239 retry.go:31] will retry after 4.62759225s: waiting for machine to come up
	I0115 09:27:16.055344   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.055760   14218 main.go:141] libmachine: (addons-732359) Found IP for machine: 192.168.39.21
	I0115 09:27:16.055792   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has current primary IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.055803   14218 main.go:141] libmachine: (addons-732359) Reserving static IP address...
	I0115 09:27:16.056085   14218 main.go:141] libmachine: (addons-732359) DBG | unable to find host DHCP lease matching {name: "addons-732359", mac: "52:54:00:77:91:7c", ip: "192.168.39.21"} in network mk-addons-732359
	I0115 09:27:16.125044   14218 main.go:141] libmachine: (addons-732359) DBG | Getting to WaitForSSH function...
	I0115 09:27:16.125082   14218 main.go:141] libmachine: (addons-732359) Reserved static IP address: 192.168.39.21
	I0115 09:27:16.125097   14218 main.go:141] libmachine: (addons-732359) Waiting for SSH to be available...
	I0115 09:27:16.128136   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.128526   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:minikube Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.128543   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.128721   14218 main.go:141] libmachine: (addons-732359) DBG | Using SSH client type: external
	I0115 09:27:16.128749   14218 main.go:141] libmachine: (addons-732359) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa (-rw-------)
	I0115 09:27:16.128778   14218 main.go:141] libmachine: (addons-732359) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 09:27:16.128792   14218 main.go:141] libmachine: (addons-732359) DBG | About to run SSH command:
	I0115 09:27:16.128813   14218 main.go:141] libmachine: (addons-732359) DBG | exit 0
	I0115 09:27:16.230169   14218 main.go:141] libmachine: (addons-732359) DBG | SSH cmd err, output: <nil>: 
	I0115 09:27:16.230389   14218 main.go:141] libmachine: (addons-732359) KVM machine creation complete!
	I0115 09:27:16.230705   14218 main.go:141] libmachine: (addons-732359) Calling .GetConfigRaw
	I0115 09:27:16.231213   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:16.231380   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:16.231544   14218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 09:27:16.231558   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:16.232748   14218 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 09:27:16.232763   14218 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 09:27:16.232769   14218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 09:27:16.232776   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:16.234863   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.235180   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.235208   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.235314   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:16.235458   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.235626   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.235757   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:16.235908   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.236242   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:16.236255   14218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 09:27:16.361563   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:27:16.361582   14218 main.go:141] libmachine: Detecting the provisioner...
	I0115 09:27:16.361590   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:16.364351   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.364614   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.364654   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.364816   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:16.365016   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.365188   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.365365   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:16.365559   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.365879   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:16.365892   14218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 09:27:16.494966   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 09:27:16.495054   14218 main.go:141] libmachine: found compatible host: buildroot
	I0115 09:27:16.495068   14218 main.go:141] libmachine: Provisioning with buildroot...
	I0115 09:27:16.495080   14218 main.go:141] libmachine: (addons-732359) Calling .GetMachineName
	I0115 09:27:16.495356   14218 buildroot.go:166] provisioning hostname "addons-732359"
	I0115 09:27:16.495378   14218 main.go:141] libmachine: (addons-732359) Calling .GetMachineName
	I0115 09:27:16.495561   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:16.497994   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.498331   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.498358   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.498501   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:16.498689   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.498837   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.499037   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:16.499180   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.499565   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:16.499580   14218 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-732359 && echo "addons-732359" | sudo tee /etc/hostname
	I0115 09:27:16.642571   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-732359
	
	I0115 09:27:16.642607   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:16.645044   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.645480   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.645509   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.645752   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:16.645958   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.646130   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:16.646297   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:16.646481   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.646858   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:16.646877   14218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-732359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-732359/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-732359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:27:16.782033   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:27:16.782067   14218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 09:27:16.782091   14218 buildroot.go:174] setting up certificates
	I0115 09:27:16.782099   14218 provision.go:83] configureAuth start
	I0115 09:27:16.782107   14218 main.go:141] libmachine: (addons-732359) Calling .GetMachineName
	I0115 09:27:16.782398   14218 main.go:141] libmachine: (addons-732359) Calling .GetIP
	I0115 09:27:16.784769   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.785082   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.785122   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.785233   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:16.787434   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.787785   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:16.787815   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:16.787969   14218 provision.go:138] copyHostCerts
	I0115 09:27:16.788039   14218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 09:27:16.788206   14218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 09:27:16.788314   14218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 09:27:16.788393   14218 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.addons-732359 san=[192.168.39.21 192.168.39.21 localhost 127.0.0.1 minikube addons-732359]
	I0115 09:27:17.065688   14218 provision.go:172] copyRemoteCerts
	I0115 09:27:17.065749   14218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:27:17.065776   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.068397   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.068696   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.068724   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.068902   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.069142   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.069282   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.069478   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:17.164264   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 09:27:17.186074   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 09:27:17.207557   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 09:27:17.228658   14218 provision.go:86] duration metric: configureAuth took 446.546212ms
	I0115 09:27:17.228687   14218 buildroot.go:189] setting minikube options for container-runtime
	I0115 09:27:17.228866   14218 config.go:182] Loaded profile config "addons-732359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:17.228950   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.232718   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.233112   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.233142   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.233299   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.233479   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.233637   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.233752   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.233904   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:17.234368   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:17.234393   14218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:27:17.583618   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:27:17.583647   14218 main.go:141] libmachine: Checking connection to Docker...
	I0115 09:27:17.583672   14218 main.go:141] libmachine: (addons-732359) Calling .GetURL
	I0115 09:27:17.584816   14218 main.go:141] libmachine: (addons-732359) DBG | Using libvirt version 6000000
	I0115 09:27:17.587112   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.587447   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.587485   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.587655   14218 main.go:141] libmachine: Docker is up and running!
	I0115 09:27:17.587676   14218 main.go:141] libmachine: Reticulating splines...
	I0115 09:27:17.587684   14218 client.go:171] LocalClient.Create took 24.802279491s
	I0115 09:27:17.587708   14218 start.go:167] duration metric: libmachine.API.Create for "addons-732359" took 24.80234053s
	I0115 09:27:17.587731   14218 start.go:300] post-start starting for "addons-732359" (driver="kvm2")
	I0115 09:27:17.587747   14218 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:27:17.587768   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:17.588031   14218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:27:17.588071   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.590291   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.590617   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.590644   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.590787   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.590953   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.591129   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.591275   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:17.683452   14218 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:27:17.687723   14218 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 09:27:17.687746   14218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 09:27:17.687802   14218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 09:27:17.687829   14218 start.go:303] post-start completed in 100.088638ms
	I0115 09:27:17.687857   14218 main.go:141] libmachine: (addons-732359) Calling .GetConfigRaw
	I0115 09:27:17.688410   14218 main.go:141] libmachine: (addons-732359) Calling .GetIP
	I0115 09:27:17.691301   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.691656   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.691676   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.691940   14218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/config.json ...
	I0115 09:27:17.692116   14218 start.go:128] duration metric: createHost completed in 24.923179413s
	I0115 09:27:17.692139   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.694126   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.694455   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.694484   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.694593   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.694759   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.694895   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.694982   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.695157   14218 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:17.695523   14218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0115 09:27:17.695537   14218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 09:27:17.822991   14218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705310837.803680981
	
	I0115 09:27:17.823017   14218 fix.go:206] guest clock: 1705310837.803680981
	I0115 09:27:17.823028   14218 fix.go:219] Guest: 2024-01-15 09:27:17.803680981 +0000 UTC Remote: 2024-01-15 09:27:17.692126491 +0000 UTC m=+25.034387313 (delta=111.55449ms)
	I0115 09:27:17.823052   14218 fix.go:190] guest clock delta is within tolerance: 111.55449ms
	I0115 09:27:17.823059   14218 start.go:83] releasing machines lock for "addons-732359", held for 25.05421742s
	I0115 09:27:17.823088   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:17.823355   14218 main.go:141] libmachine: (addons-732359) Calling .GetIP
	I0115 09:27:17.826096   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.826455   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.826485   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.826653   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:17.827066   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:17.827233   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:17.827316   14218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:27:17.827358   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.827448   14218 ssh_runner.go:195] Run: cat /version.json
	I0115 09:27:17.827473   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:17.830026   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.830067   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.830355   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.830380   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.830470   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.830471   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:17.830512   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:17.830616   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.830740   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.830744   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:17.830928   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:17.830944   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:17.831101   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:17.831217   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:17.946355   14218 ssh_runner.go:195] Run: systemctl --version
	I0115 09:27:17.952146   14218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:27:18.109879   14218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 09:27:18.116752   14218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 09:27:18.116820   14218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:27:18.132185   14218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 09:27:18.132203   14218 start.go:475] detecting cgroup driver to use...
	I0115 09:27:18.132273   14218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:27:18.148820   14218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:27:18.163653   14218 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:27:18.163699   14218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:27:18.178293   14218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:27:18.192731   14218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:27:18.314983   14218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:27:18.440327   14218 docker.go:233] disabling docker service ...
	I0115 09:27:18.440389   14218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:27:18.453627   14218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:27:18.465054   14218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:27:18.571498   14218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:27:18.678558   14218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:27:18.690311   14218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:27:18.706483   14218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:27:18.706545   14218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.715654   14218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:27:18.715731   14218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.724747   14218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.733625   14218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.742593   14218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:27:18.751815   14218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:27:18.760065   14218 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:27:18.760120   14218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 09:27:18.772544   14218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:27:18.780297   14218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:27:18.874603   14218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:27:19.038411   14218 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:27:19.038516   14218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:27:19.043054   14218 start.go:543] Will wait 60s for crictl version
	I0115 09:27:19.043110   14218 ssh_runner.go:195] Run: which crictl
	I0115 09:27:19.046444   14218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:27:19.086009   14218 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 09:27:19.086122   14218 ssh_runner.go:195] Run: crio --version
	I0115 09:27:19.136949   14218 ssh_runner.go:195] Run: crio --version
	I0115 09:27:19.187477   14218 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 09:27:19.188990   14218 main.go:141] libmachine: (addons-732359) Calling .GetIP
	I0115 09:27:19.191493   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:19.191773   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:19.191799   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:19.192012   14218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 09:27:19.195833   14218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:27:19.206873   14218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:27:19.206933   14218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:27:19.241779   14218 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 09:27:19.241844   14218 ssh_runner.go:195] Run: which lz4
	I0115 09:27:19.245357   14218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 09:27:19.249131   14218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 09:27:19.249154   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 09:27:20.965133   14218 crio.go:444] Took 1.719804 seconds to copy over tarball
	I0115 09:27:20.965212   14218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 09:27:23.979783   14218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.014545671s)
	I0115 09:27:23.979812   14218 crio.go:451] Took 3.014649 seconds to extract the tarball
	I0115 09:27:23.979824   14218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 09:27:24.020178   14218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:27:24.085126   14218 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:27:24.085156   14218 cache_images.go:84] Images are preloaded, skipping loading
	I0115 09:27:24.085235   14218 ssh_runner.go:195] Run: crio config
	I0115 09:27:24.150739   14218 cni.go:84] Creating CNI manager for ""
	I0115 09:27:24.150759   14218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:27:24.150777   14218 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:27:24.150793   14218 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-732359 NodeName:addons-732359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:27:24.150917   14218 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-732359"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 09:27:24.150985   14218 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-732359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-732359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:27:24.151031   14218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:27:24.160731   14218 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:27:24.160798   14218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:27:24.169568   14218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0115 09:27:24.184422   14218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:27:24.198848   14218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0115 09:27:24.213395   14218 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0115 09:27:24.216765   14218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:27:24.227737   14218 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359 for IP: 192.168.39.21
	I0115 09:27:24.227766   14218 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.227901   14218 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 09:27:24.315401   14218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt ...
	I0115 09:27:24.315429   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt: {Name:mk6ea3652e4c623a17385179444eccb646d56863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.315596   14218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key ...
	I0115 09:27:24.315614   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key: {Name:mkb670773521f76a83555b170b42982a1855fcbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.315707   14218 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 09:27:24.377021   14218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt ...
	I0115 09:27:24.377047   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt: {Name:mk2f408965d728e617d0bddcd050c42ac874cd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.377214   14218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key ...
	I0115 09:27:24.377234   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key: {Name:mk9d44de7ecae80a1191eb1e630c48a76f376ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.377358   14218 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.key
	I0115 09:27:24.377376   14218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt with IP's: []
	I0115 09:27:24.664170   14218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt ...
	I0115 09:27:24.664204   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: {Name:mk16173a0d6737ddf751e4cb3ad9f5c6f5da51f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.664370   14218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.key ...
	I0115 09:27:24.664381   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.key: {Name:mkccebbe474dc3de74ad95ef51cfcb8396b35d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.664446   14218 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key.86be2464
	I0115 09:27:24.664462   14218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt.86be2464 with IP's: [192.168.39.21 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:27:24.822622   14218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt.86be2464 ...
	I0115 09:27:24.822653   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt.86be2464: {Name:mkc50a324b13150d8865b07a137b0731c95874b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.822798   14218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key.86be2464 ...
	I0115 09:27:24.822810   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key.86be2464: {Name:mk9697d452ba9ca31d25acc7a9e0c1d20364942b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:24.822870   14218 certs.go:337] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt.86be2464 -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt
	I0115 09:27:24.822930   14218 certs.go:341] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key.86be2464 -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key
	I0115 09:27:24.822975   14218 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.key
	I0115 09:27:24.822990   14218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.crt with IP's: []
	I0115 09:27:25.177424   14218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.crt ...
	I0115 09:27:25.177452   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.crt: {Name:mk5bf8271b456397c94a41f1ef3c2d60b26814be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:25.177601   14218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.key ...
	I0115 09:27:25.177611   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.key: {Name:mk4c3a5fd598f77f4ab00b522b3961e5b9bc9012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:25.177761   14218 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 09:27:25.177793   14218 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 09:27:25.177816   14218 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:27:25.177839   14218 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 09:27:25.178378   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:27:25.201757   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 09:27:25.222716   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:27:25.244560   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 09:27:25.265730   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:27:25.286638   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:27:25.307586   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:27:25.328263   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:27:25.349934   14218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:27:25.370886   14218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:27:25.385989   14218 ssh_runner.go:195] Run: openssl version
	I0115 09:27:25.391043   14218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:27:25.400846   14218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:25.404926   14218 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:25.404968   14218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:25.409984   14218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:27:25.419908   14218 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:27:25.423445   14218 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:27:25.423484   14218 kubeadm.go:404] StartCluster: {Name:addons-732359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-732359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:27:25.423557   14218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:27:25.423593   14218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:27:25.459146   14218 cri.go:89] found id: ""
	I0115 09:27:25.459213   14218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:27:25.468575   14218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:27:25.477587   14218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:27:25.486558   14218 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:27:25.486596   14218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 09:27:25.541862   14218 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 09:27:25.541985   14218 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:27:25.672335   14218 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:27:25.672501   14218 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:27:25.672634   14218 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:27:25.908097   14218 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:27:25.960734   14218 out.go:204]   - Generating certificates and keys ...
	I0115 09:27:25.960854   14218 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:27:25.960958   14218 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:27:26.145022   14218 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:27:26.241961   14218 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:27:26.353625   14218 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:27:26.481854   14218 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:27:26.763591   14218 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:27:26.763780   14218 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-732359 localhost] and IPs [192.168.39.21 127.0.0.1 ::1]
	I0115 09:27:26.873194   14218 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:27:26.873497   14218 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-732359 localhost] and IPs [192.168.39.21 127.0.0.1 ::1]
	I0115 09:27:27.023843   14218 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:27:27.228855   14218 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:27:27.422409   14218 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:27:27.422657   14218 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:27:27.502344   14218 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:27:27.986816   14218 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:27:28.228137   14218 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:27:28.293701   14218 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:27:28.294225   14218 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:27:28.296571   14218 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:27:28.412189   14218 out.go:204]   - Booting up control plane ...
	I0115 09:27:28.412337   14218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:27:28.412427   14218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:27:28.412514   14218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:27:28.412660   14218 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:27:28.412761   14218 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:27:28.412803   14218 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:27:28.441881   14218 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:27:35.942041   14218 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502156 seconds
	I0115 09:27:35.942192   14218 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:27:35.961065   14218 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:27:36.492437   14218 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:27:36.492686   14218 kubeadm.go:322] [mark-control-plane] Marking the node addons-732359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:27:37.008251   14218 kubeadm.go:322] [bootstrap-token] Using token: 6ffv3n.baafi72yq710vpdf
	I0115 09:27:37.009736   14218 out.go:204]   - Configuring RBAC rules ...
	I0115 09:27:37.009866   14218 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:27:37.014607   14218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:27:37.025520   14218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:27:37.029892   14218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:27:37.034158   14218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:27:37.042551   14218 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:27:37.059431   14218 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:27:37.321489   14218 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:27:37.431123   14218 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:27:37.432033   14218 kubeadm.go:322] 
	I0115 09:27:37.432119   14218 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:27:37.432132   14218 kubeadm.go:322] 
	I0115 09:27:37.432223   14218 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:27:37.432235   14218 kubeadm.go:322] 
	I0115 09:27:37.432266   14218 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:27:37.432335   14218 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:27:37.432403   14218 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:27:37.432412   14218 kubeadm.go:322] 
	I0115 09:27:37.432485   14218 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 09:27:37.432494   14218 kubeadm.go:322] 
	I0115 09:27:37.432565   14218 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:27:37.432576   14218 kubeadm.go:322] 
	I0115 09:27:37.432637   14218 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:27:37.432751   14218 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:27:37.432859   14218 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:27:37.432889   14218 kubeadm.go:322] 
	I0115 09:27:37.433006   14218 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:27:37.433111   14218 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:27:37.433122   14218 kubeadm.go:322] 
	I0115 09:27:37.433223   14218 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6ffv3n.baafi72yq710vpdf \
	I0115 09:27:37.433350   14218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 09:27:37.433405   14218 kubeadm.go:322] 	--control-plane 
	I0115 09:27:37.433422   14218 kubeadm.go:322] 
	I0115 09:27:37.433530   14218 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:27:37.433548   14218 kubeadm.go:322] 
	I0115 09:27:37.433679   14218 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6ffv3n.baafi72yq710vpdf \
	I0115 09:27:37.433828   14218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 09:27:37.435011   14218 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:27:37.435032   14218 cni.go:84] Creating CNI manager for ""
	I0115 09:27:37.435039   14218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:27:37.437755   14218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 09:27:37.439143   14218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 09:27:37.488102   14218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
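The two steps above configure bridge CNI: minikube creates /etc/cni/net.d and copies a 457-byte conflist into it over SSH. The exact file contents are not shown in this log; as a rough sketch only, assuming the usual bridge-plugin fields (cniVersion, bridge, host-local IPAM, portmap) rather than minikube's actual template, a conflist of that general shape could be produced like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed field values for illustration; not minikube's actual template.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":      "bridge",
					"bridge":    "bridge",
					"isGateway": true,
					"ipMasq":    true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // the equivalent bytes are written via the ssh_runner scp step above
	}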
	I0115 09:27:37.562614   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:37.562653   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=addons-732359 minikube.k8s.io/updated_at=2024_01_15T09_27_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:37.562612   14218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:27:37.632850   14218 ops.go:34] apiserver oom_adj: -16
	I0115 09:27:37.777085   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:38.277579   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:38.778116   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:39.277402   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:39.778036   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:40.277653   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:40.777813   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:41.277640   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:41.777690   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:42.277220   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:42.777803   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:43.277367   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:43.777673   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:44.277809   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:44.777929   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:45.277985   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:45.778014   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:46.277397   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:46.777172   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:47.278134   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:47.777846   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:48.277084   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:48.777293   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:49.277180   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:49.778158   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:50.277773   14218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:50.393079   14218 kubeadm.go:1088] duration metric: took 12.830508829s to wait for elevateKubeSystemPrivileges.
	I0115 09:27:50.393114   14218 kubeadm.go:406] StartCluster complete in 24.969631525s
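The run of identical commands above is a poll loop: minikube re-issues "kubectl get sa default" roughly every 500ms until the default service account exists, then reports the elapsed wait (12.83s here) and declares StartCluster complete. A minimal sketch of that retry pattern, assuming a plain sleep-based loop and a hypothetical 2-minute timeout rather than minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount re-runs "kubectl get sa default" until it
	// succeeds or the timeout expires, returning how long the wait took.
	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return time.Since(start), nil // default service account exists
			}
			if time.Now().After(deadline) {
				return time.Since(start), fmt.Errorf("timed out waiting for default service account")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		d, err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute, // assumed timeout
		)
		fmt.Println(d, err)
	}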
	I0115 09:27:50.393135   14218 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:50.393268   14218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:27:50.393772   14218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:50.394010   14218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:27:50.394029   14218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 09:27:50.394130   14218 addons.go:69] Setting default-storageclass=true in profile "addons-732359"
	I0115 09:27:50.394159   14218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-732359"
	I0115 09:27:50.394162   14218 addons.go:69] Setting cloud-spanner=true in profile "addons-732359"
	I0115 09:27:50.394178   14218 addons.go:69] Setting inspektor-gadget=true in profile "addons-732359"
	I0115 09:27:50.394171   14218 addons.go:69] Setting ingress-dns=true in profile "addons-732359"
	I0115 09:27:50.394194   14218 addons.go:234] Setting addon ingress-dns=true in "addons-732359"
	I0115 09:27:50.394197   14218 addons.go:234] Setting addon cloud-spanner=true in "addons-732359"
	I0115 09:27:50.394202   14218 addons.go:69] Setting storage-provisioner=true in profile "addons-732359"
	I0115 09:27:50.394213   14218 addons.go:234] Setting addon storage-provisioner=true in "addons-732359"
	I0115 09:27:50.394208   14218 addons.go:69] Setting registry=true in profile "addons-732359"
	I0115 09:27:50.394228   14218 addons.go:234] Setting addon registry=true in "addons-732359"
	I0115 09:27:50.394242   14218 config.go:182] Loaded profile config "addons-732359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:50.394257   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394261   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394263   14218 addons.go:69] Setting gcp-auth=true in profile "addons-732359"
	I0115 09:27:50.394279   14218 mustload.go:65] Loading cluster: addons-732359
	I0115 09:27:50.394279   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394302   14218 addons.go:69] Setting helm-tiller=true in profile "addons-732359"
	I0115 09:27:50.394315   14218 addons.go:234] Setting addon helm-tiller=true in "addons-732359"
	I0115 09:27:50.394349   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394476   14218 config.go:182] Loaded profile config "addons-732359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:50.394476   14218 addons.go:69] Setting ingress=true in profile "addons-732359"
	I0115 09:27:50.394508   14218 addons.go:234] Setting addon ingress=true in "addons-732359"
	I0115 09:27:50.394575   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394687   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394704   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394719   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394728   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394737   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394752   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394166   14218 addons.go:69] Setting yakd=true in profile "addons-732359"
	I0115 09:27:50.394774   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394257   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394789   14218 addons.go:234] Setting addon yakd=true in "addons-732359"
	I0115 09:27:50.394792   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394798   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394802   14218 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-732359"
	I0115 09:27:50.394814   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394815   14218 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-732359"
	I0115 09:27:50.394778   14218 addons.go:69] Setting metrics-server=true in profile "addons-732359"
	I0115 09:27:50.394834   14218 addons.go:234] Setting addon metrics-server=true in "addons-732359"
	I0115 09:27:50.394194   14218 addons.go:234] Setting addon inspektor-gadget=true in "addons-732359"
	I0115 09:27:50.394843   14218 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-732359"
	I0115 09:27:50.394853   14218 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-732359"
	I0115 09:27:50.394864   14218 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-732359"
	I0115 09:27:50.394879   14218 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-732359"
	I0115 09:27:50.394914   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.394934   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.394966   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394984   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395010   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395082   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.395088   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395105   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395193   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395208   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.394844   14218 addons.go:69] Setting volumesnapshots=true in profile "addons-732359"
	I0115 09:27:50.395267   14218 addons.go:234] Setting addon volumesnapshots=true in "addons-732359"
	I0115 09:27:50.395280   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.395303   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.395396   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395429   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395466   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.395586   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395603   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395609   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395637   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395660   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.395678   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.395683   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.395982   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.396020   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.410829   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0115 09:27:50.413376   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0115 09:27:50.413899   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0115 09:27:50.419012   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.419051   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.423535   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.423655   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.423717   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.424468   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.424482   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.424611   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.424621   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.430968   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.430989   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.431053   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.431094   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.431467   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.431896   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.431928   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.432398   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.432434   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.432449   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.432469   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.451774   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0115 09:27:50.451803   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I0115 09:27:50.451898   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0115 09:27:50.451943   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0115 09:27:50.452311   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.452322   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.452688   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.452864   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.452883   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.452950   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0115 09:27:50.453307   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.453467   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.453489   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.453619   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.453636   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.453856   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.453880   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.453938   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.453987   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.454408   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.454453   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.454425   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.454834   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.454874   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.454878   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.454902   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.455076   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.455723   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.455746   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.456859   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0115 09:27:50.456938   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.456970   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.457826   14218 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-732359"
	I0115 09:27:50.457871   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.458247   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.458292   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.459722   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.460250   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.460282   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.462097   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.462920   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.462947   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.463325   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.463506   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.464658   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0115 09:27:50.464974   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.465066   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.465200   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44675
	I0115 09:27:50.465365   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0115 09:27:50.465420   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.465459   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.466038   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.466054   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.466119   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.466182   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.467104   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.467238   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.467250   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.467367   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.467380   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.468121   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.468205   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0115 09:27:50.468312   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.468425   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.468469   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.468796   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.469317   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.469334   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.469778   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.469833   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0115 09:27:50.470091   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.470115   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.470136   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.470598   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.471207   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.471223   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.472012   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.474455   14218 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 09:27:50.472548   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.472575   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.476452   14218 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 09:27:50.476465   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 09:27:50.476484   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.479147   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.479199   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.480103   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.481877   14218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:27:50.483410   14218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:27:50.483426   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:27:50.483452   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.481393   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I0115 09:27:50.481780   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.483596   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.483618   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.482384   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.484391   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.484580   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.484651   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.485101   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.485747   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.485762   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.486169   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.486237   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.486851   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.486873   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.487083   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.487149   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.487164   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.487190   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36463
	I0115 09:27:50.487865   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.487920   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.488098   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.488256   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.488883   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.488899   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.489234   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.489437   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.491443   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.493671   14218 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:27:50.495279   14218 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 09:27:50.494353   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42971
	I0115 09:27:50.497137   14218 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:27:50.498767   14218 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 09:27:50.498784   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 09:27:50.498802   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.497638   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.499826   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.499843   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.500247   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.500849   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.500875   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.502034   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.502586   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.502616   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.502774   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.502983   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.503155   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.503366   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.503930   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0115 09:27:50.504656   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0115 09:27:50.504804   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0115 09:27:50.505201   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.505704   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.505718   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.506128   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.506349   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.506964   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0115 09:27:50.507877   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.507945   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.508701   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.508717   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.508848   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.508859   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.509135   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0115 09:27:50.509271   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.509778   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.509840   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0115 09:27:50.509967   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.510940   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.510977   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.511160   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.511246   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.512148   14218 addons.go:234] Setting addon default-storageclass=true in "addons-732359"
	I0115 09:27:50.512178   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:50.512541   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.512571   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.512767   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.512949   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.512960   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.513024   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0115 09:27:50.513097   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0115 09:27:50.513161   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0115 09:27:50.513290   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.513300   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.514520   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.514586   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.514634   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.516715   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 09:27:50.516791   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.515618   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.515883   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.516166   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.516418   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.515018   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.518742   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 09:27:50.520605   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 09:27:50.518788   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.522144   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 09:27:50.518835   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.518920   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0115 09:27:50.519066   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.519092   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.519604   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.518826   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.520993   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.523879   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.526236   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 09:27:50.525017   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.525048   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I0115 09:27:50.525072   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.525154   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.525993   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.526053   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.526505   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.527209   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.528906   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 09:27:50.527821   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.527913   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.527995   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.528507   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.529687   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.530062   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.530977   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0115 09:27:50.531481   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0115 09:27:50.531967   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.531998   14218 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 09:27:50.533416   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.533510   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.534517   14218 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 09:27:50.536459   14218 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 09:27:50.536472   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 09:27:50.536484   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.537613   14218 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0115 09:27:50.534972   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 09:27:50.534993   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.534532   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.535503   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.535643   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.536821   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0115 09:27:50.539116   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.539165   14218 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0115 09:27:50.539179   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.539635   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.540924   14218 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 09:27:50.541303   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.541370   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.542252   14218 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 09:27:50.542265   14218 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 09:27:50.542376   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 09:27:50.542852   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.545096   14218 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 09:27:50.543811   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.543826   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 09:27:50.543827   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0115 09:27:50.543867   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.542912   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.543830   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.544027   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.544040   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.544186   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.546547   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.546557   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.546599   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 09:27:50.546867   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.548164   14218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 09:27:50.548591   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.550210   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 09:27:50.550245   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.550667   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.551584   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 09:27:50.551611   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.551627   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 09:27:50.553570   14218 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 09:27:50.551642   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.551589   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.551672   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 09:27:50.551995   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.552001   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.552023   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.552976   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:50.554905   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:50.554921   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.555050   14218 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 09:27:50.555058   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 09:27:50.555070   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.556219   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.557493   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.557759   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.557964   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.558174   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.558378   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.559718   14218 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 09:27:50.558646   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.559180   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.559184   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.559511   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.560697   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.561242   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.561267   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.561270   14218 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 09:27:50.561280   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 09:27:50.561283   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.561293   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.561302   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.561170   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.561201   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.561525   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.561551   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.561525   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.562062   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.562071   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.562084   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.562064   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.562149   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.562175   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.562289   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.562304   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.562320   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.562361   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.562711   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.562785   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.564415   14218 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 09:27:50.565493   14218 out.go:177]   - Using image docker.io/busybox:stable
	I0115 09:27:50.564509   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.563056   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.563237   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.563051   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.564773   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.565812   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.565884   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.566327   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.566969   14218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 09:27:50.566979   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 09:27:50.566990   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.566995   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.567020   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.567044   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.567065   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.567072   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.567100   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.567122   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.567206   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.567274   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.567362   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.568904   14218 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 09:27:50.567482   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.567492   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.567611   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.570238   14218 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 09:27:50.570244   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.570255   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 09:27:50.570277   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.570337   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.570514   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.570868   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.570932   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.570947   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.571097   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.571286   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.571448   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:50.572920   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.573237   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.573265   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.573383   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.573593   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.573736   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	W0115 09:27:50.573866   14218 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48370->192.168.39.21:22: read: connection reset by peer
	I0115 09:27:50.573894   14218 retry.go:31] will retry after 175.212507ms: ssh: handshake failed: read tcp 192.168.39.1:48370->192.168.39.21:22: read: connection reset by peer
	I0115 09:27:50.573940   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	W0115 09:27:50.574727   14218 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48374->192.168.39.21:22: read: connection reset by peer
	I0115 09:27:50.574772   14218 retry.go:31] will retry after 275.508928ms: ssh: handshake failed: read tcp 192.168.39.1:48374->192.168.39.21:22: read: connection reset by peer
	I0115 09:27:50.576635   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0115 09:27:50.576941   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:50.577434   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:50.577454   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:50.577741   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:50.577928   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:50.579106   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:50.579334   14218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:27:50.579353   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:27:50.579368   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:50.581584   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.581910   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:50.581940   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:50.582132   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:50.582272   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:50.582409   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:50.582603   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	W0115 09:27:50.583331   14218 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0115 09:27:50.583356   14218 retry.go:31] will retry after 300.331766ms: ssh: handshake failed: EOF
	I0115 09:27:50.684297   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:27:50.714622   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 09:27:50.746053   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 09:27:50.799485   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 09:27:50.827676   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 09:27:50.828245   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 09:27:50.828261   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 09:27:50.832708   14218 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 09:27:50.832724   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 09:27:50.847090   14218 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0115 09:27:50.847114   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0115 09:27:50.851252   14218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:27:50.874352   14218 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 09:27:50.874371   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 09:27:50.928954   14218 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 09:27:50.928973   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 09:27:50.938778   14218 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-732359" context rescaled to 1 replicas
	I0115 09:27:50.938812   14218 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:27:50.940800   14218 out.go:177] * Verifying Kubernetes components...
	I0115 09:27:50.942521   14218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:27:50.948517   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 09:27:50.998763   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 09:27:50.998781   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 09:27:51.048400   14218 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 09:27:51.048427   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0115 09:27:51.054841   14218 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 09:27:51.054857   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 09:27:51.079663   14218 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 09:27:51.079684   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 09:27:51.211587   14218 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 09:27:51.211609   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 09:27:51.263671   14218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 09:27:51.263695   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 09:27:51.288469   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 09:27:51.288488   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 09:27:51.295008   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 09:27:51.307442   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 09:27:51.332599   14218 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 09:27:51.332622   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 09:27:51.372584   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:27:51.384197   14218 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 09:27:51.384222   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 09:27:51.441142   14218 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 09:27:51.441166   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 09:27:51.476446   14218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 09:27:51.476472   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 09:27:51.495883   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 09:27:51.495915   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 09:27:51.530840   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 09:27:51.530877   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 09:27:51.561580   14218 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 09:27:51.561598   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 09:27:51.604547   14218 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 09:27:51.604607   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 09:27:51.628610   14218 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 09:27:51.628631   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 09:27:51.639597   14218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 09:27:51.639616   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 09:27:51.643134   14218 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:27:51.643153   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 09:27:51.668396   14218 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 09:27:51.668411   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 09:27:51.702069   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 09:27:51.733768   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 09:27:51.748712   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 09:27:51.748744   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 09:27:51.760788   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:27:51.781117   14218 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 09:27:51.781146   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 09:27:51.832300   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 09:27:51.832330   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 09:27:51.846123   14218 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 09:27:51.846141   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 09:27:51.915276   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 09:27:51.915307   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 09:27:51.926766   14218 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 09:27:51.926788   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 09:27:51.979237   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 09:27:51.979265   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 09:27:51.999657   14218 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 09:27:51.999681   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 09:27:52.044725   14218 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 09:27:52.044746   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 09:27:52.058372   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 09:27:52.081800   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 09:27:57.193865   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.509529242s)
	I0115 09:27:57.193932   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:57.193952   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:57.194356   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:57.194383   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:57.194394   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:57.194404   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:57.194718   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:57.194722   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:57.194741   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:57.569373   14218 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 09:27:57.569418   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:57.572503   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:57.572873   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:57.572904   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:57.573113   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:57.573328   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:57.573494   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:57.573648   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:57.780062   14218 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 09:27:57.840925   14218 addons.go:234] Setting addon gcp-auth=true in "addons-732359"
	I0115 09:27:57.840980   14218 host.go:66] Checking if "addons-732359" exists ...
	I0115 09:27:57.841310   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:57.841350   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:57.873408   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0115 09:27:57.873913   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:57.874383   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:57.874410   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:57.874751   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:57.875323   14218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:27:57.875370   14218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:27:57.890964   14218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0115 09:27:57.891351   14218 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:27:57.891765   14218 main.go:141] libmachine: Using API Version  1
	I0115 09:27:57.891790   14218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:27:57.892121   14218 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:27:57.892292   14218 main.go:141] libmachine: (addons-732359) Calling .GetState
	I0115 09:27:57.893828   14218 main.go:141] libmachine: (addons-732359) Calling .DriverName
	I0115 09:27:57.894058   14218 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 09:27:57.894087   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHHostname
	I0115 09:27:57.897555   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:57.898068   14218 main.go:141] libmachine: (addons-732359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:91:7c", ip: ""} in network mk-addons-732359: {Iface:virbr1 ExpiryTime:2024-01-15 10:27:09 +0000 UTC Type:0 Mac:52:54:00:77:91:7c Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:addons-732359 Clientid:01:52:54:00:77:91:7c}
	I0115 09:27:57.898103   14218 main.go:141] libmachine: (addons-732359) DBG | domain addons-732359 has defined IP address 192.168.39.21 and MAC address 52:54:00:77:91:7c in network mk-addons-732359
	I0115 09:27:57.898292   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHPort
	I0115 09:27:57.898482   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHKeyPath
	I0115 09:27:57.898652   14218 main.go:141] libmachine: (addons-732359) Calling .GetSSHUsername
	I0115 09:27:57.898815   14218 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/addons-732359/id_rsa Username:docker}
	I0115 09:27:59.323519   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.608857818s)
	I0115 09:27:59.323571   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.323587   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.323613   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.524100201s)
	I0115 09:27:59.323570   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.577487853s)
	I0115 09:27:59.323652   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.323664   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.323666   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.323732   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.323791   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.496085728s)
	I0115 09:27:59.323820   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.323818   14218 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.472532812s)
	I0115 09:27:59.323834   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.323847   14218 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0115 09:27:59.323857   14218 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.381313672s)
	I0115 09:27:59.323900   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.375366083s)
	I0115 09:27:59.323919   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.323929   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.323983   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324006   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324023   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.028991521s)
	I0115 09:27:59.324023   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324035   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324039   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324043   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324048   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324051   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324049   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324055   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324057   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324062   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324065   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324067   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324071   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324080   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324110   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.016639722s)
	I0115 09:27:59.324025   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324127   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324137   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324161   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324185   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.951578434s)
	I0115 09:27:59.324191   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324201   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324202   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324209   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324211   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324218   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324270   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.622178448s)
	I0115 09:27:59.324284   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324293   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.324319   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324346   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324354   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324363   14218 addons.go:470] Verifying addon ingress=true in "addons-732359"
	I0115 09:27:59.324385   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.590591304s)
	I0115 09:27:59.324400   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324424   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.328481   14218 out.go:177] * Verifying ingress addon...
	I0115 09:27:59.324716   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324738   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330095   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324755   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330107   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324774   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330121   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.330131   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324788   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330145   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324805   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330156   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.330176   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330188   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.324824   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324841   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330208   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324858   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324871   14218 node_ready.go:35] waiting up to 6m0s for node "addons-732359" to be "Ready" ...
	I0115 09:27:59.330331   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.324932   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330359   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330362   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.324954   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330379   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330388   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330429   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.324996   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.564156213s)
	I0115 09:27:59.330452   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	W0115 09:27:59.330492   14218 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 09:27:59.330509   14218 retry.go:31] will retry after 136.41024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 09:27:59.325069   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.266669669s)
	I0115 09:27:59.330559   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.330568   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.326388   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330589   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330596   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.330604   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.326427   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.326561   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.326597   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330661   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330673   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.330684   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.326622   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.326643   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330741   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.328886   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330776   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330785   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.330793   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.330196   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.330863   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330871   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.330894   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330903   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330911   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.330920   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.330945   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.330954   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.330961   14218 addons.go:470] Verifying addon registry=true in "addons-732359"
	I0115 09:27:59.332407   14218 out.go:177] * Verifying registry addon...
	I0115 09:27:59.331123   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.331144   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.331283   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.331298   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.331320   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.331336   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.332014   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.332037   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.333543   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.333555   14218 addons.go:470] Verifying addon metrics-server=true in "addons-732359"
	I0115 09:27:59.333603   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.333629   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.334984   14218 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-732359 service yakd-dashboard -n yakd-dashboard
	
	I0115 09:27:59.333661   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.333708   14218 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 09:27:59.334258   14218 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 09:27:59.364295   14218 node_ready.go:49] node "addons-732359" has status "Ready":"True"
	I0115 09:27:59.364319   14218 node_ready.go:38] duration metric: took 34.058526ms waiting for node "addons-732359" to be "Ready" ...
	I0115 09:27:59.364330   14218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:27:59.366688   14218 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 09:27:59.366708   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:59.376475   14218 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 09:27:59.376556   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:59.399784   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.399821   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.400067   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.400107   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.400117   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	W0115 09:27:59.400362   14218 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0115 09:27:59.427289   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:27:59.427317   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:27:59.427669   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:27:59.427689   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:27:59.427710   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:27:59.429939   14218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace to be "Ready" ...
	I0115 09:27:59.467361   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:28:00.011254   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:00.011533   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:00.357934   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:00.358008   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:00.369861   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.288017586s)
	I0115 09:28:00.369905   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:00.369921   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:00.369937   14218 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.475854223s)
	I0115 09:28:00.372079   14218 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:28:00.370239   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:00.370260   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:00.373415   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:00.373428   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:00.373439   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:00.374902   14218 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 09:28:00.373784   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:00.373754   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:00.376437   14218 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 09:28:00.376456   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 09:28:00.374940   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:00.376480   14218 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-732359"
	I0115 09:28:00.377969   14218 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 09:28:00.379990   14218 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 09:28:00.404950   14218 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 09:28:00.404969   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:00.413686   14218 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 09:28:00.413703   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 09:28:00.501037   14218 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 09:28:00.501063   14218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 09:28:00.603037   14218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 09:28:00.876696   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:00.878076   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:00.919179   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:01.343013   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:01.348560   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:01.394652   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:01.444768   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:01.848761   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:01.849052   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:01.896754   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:02.131029   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.663625699s)
	I0115 09:28:02.131093   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:02.131108   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:02.131477   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:02.131478   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:02.131505   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:02.131520   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:02.131530   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:02.131767   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:02.131786   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:02.131827   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:02.362179   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:02.367639   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:02.425901   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:02.509786   14218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.906711554s)
	I0115 09:28:02.509831   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:02.509856   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:02.510159   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:02.510204   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:02.510231   14218 main.go:141] libmachine: Making call to close driver server
	I0115 09:28:02.510252   14218 main.go:141] libmachine: (addons-732359) Calling .Close
	I0115 09:28:02.510207   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:02.510528   14218 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:28:02.510545   14218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:28:02.510556   14218 main.go:141] libmachine: (addons-732359) DBG | Closing plugin on server side
	I0115 09:28:02.511513   14218 addons.go:470] Verifying addon gcp-auth=true in "addons-732359"
	I0115 09:28:02.513129   14218 out.go:177] * Verifying gcp-auth addon...
	I0115 09:28:02.515642   14218 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 09:28:02.546366   14218 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 09:28:02.546389   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:02.867440   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:02.867892   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:02.900041   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:03.024193   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:03.347514   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:03.348912   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:03.393576   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:03.532739   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:03.842593   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:03.844647   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:03.885699   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:03.938733   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:04.027067   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:04.345048   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:04.345177   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:04.387344   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:04.519092   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:04.843435   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:04.845232   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:04.886310   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:05.019412   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:05.342582   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:05.343743   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:05.385729   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:05.521351   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:05.843127   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:05.844325   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:05.886981   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:05.942753   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:06.021930   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:06.342435   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:06.342565   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:06.386040   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:06.524667   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:06.841131   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:06.849069   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:06.886424   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:07.037295   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:07.350530   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:07.350965   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:07.385313   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:07.522338   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:07.843888   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:07.844216   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:07.899224   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:07.955337   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:08.031344   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:08.342371   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:08.343922   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:08.387083   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:08.520108   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:08.853291   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:08.854365   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:08.886475   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:09.021033   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:09.367780   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:09.368017   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:09.689343   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:09.694043   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:09.845816   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:09.845948   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:09.888117   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:10.035667   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:10.344921   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:10.344991   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:10.386271   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:10.438397   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:10.518673   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:10.843068   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:10.844477   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:10.892619   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:11.020775   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:11.344943   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:11.345302   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:11.387610   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:11.521440   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:11.847589   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:11.847784   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:11.892685   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:12.019929   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:12.344051   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:12.344253   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:12.390457   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:12.444076   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:12.522064   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:12.844117   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:12.845887   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:12.887371   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:13.034358   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:13.361337   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:13.368462   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:13.391542   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:13.525249   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:13.847157   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:13.848242   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:13.906627   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:14.020186   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:14.350130   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:14.350291   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:14.399158   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:14.447779   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:14.520671   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:14.846248   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:14.849328   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:14.889030   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:15.026910   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:15.353675   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:15.372016   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:15.406532   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:15.532209   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:15.842372   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:15.843247   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:15.887997   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:16.020627   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:16.344398   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:16.344633   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:16.386060   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:16.521337   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:16.841608   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:16.843781   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:16.886366   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:16.938813   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:17.019004   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:17.343868   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:17.343995   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:17.386911   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:17.520599   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:17.841765   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:17.842812   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:17.889509   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:18.020197   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:18.343274   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:18.343912   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:18.385815   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:18.519626   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:18.849265   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:18.851667   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:18.886527   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:19.019868   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:19.351606   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:19.352487   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:19.385592   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:19.437872   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:19.522042   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:19.842696   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:19.843044   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:19.886201   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:20.020371   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:20.352336   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:20.356079   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:20.424499   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:20.532086   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:20.841161   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:20.841710   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:20.892617   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:21.020298   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:21.342499   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:21.342983   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:21.386187   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:21.444915   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:21.519658   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:21.842662   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:21.844210   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:21.886013   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:22.019435   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:22.341604   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:22.342017   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:22.387287   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:22.521731   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:23.041041   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:23.047594   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:23.047700   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.053531   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:23.341705   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:23.344054   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.386916   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:23.520363   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:23.844505   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.844556   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:23.888118   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:23.938121   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:24.019870   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:24.343678   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:24.347408   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:24.385882   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:24.519306   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:24.841347   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:24.841379   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:24.885718   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:25.020630   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:25.343596   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:25.344141   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:25.385679   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:25.520841   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:25.842773   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:25.842910   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:25.886156   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:26.020977   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:26.344640   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:26.344657   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:26.387978   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:26.437082   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:26.519213   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:26.841875   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:26.852403   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:26.889884   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:27.020409   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:27.348332   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:27.348411   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:27.394027   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:27.519989   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:27.843067   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:27.843393   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:27.885624   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:28.020658   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:28.342260   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:28.345451   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:28.390770   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:28.441490   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:28.538191   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:28.842350   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:28.842471   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:28.885525   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:29.020189   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:29.347970   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:29.348922   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:29.389492   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:29.521978   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:29.843561   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:29.844016   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:29.885089   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:30.019780   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:30.343484   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:30.344085   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:30.385361   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:30.443368   14218 pod_ready.go:102] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:30.522463   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:30.843645   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:30.843772   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:30.892585   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:31.021390   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:31.342019   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:31.343231   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:31.386110   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:31.521686   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:31.843527   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:31.843660   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:31.885209   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:32.019241   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:32.342774   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:32.343588   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:32.388341   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:32.439721   14218 pod_ready.go:92] pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.439744   14218 pod_ready.go:81] duration metric: took 33.009786119s waiting for pod "coredns-5dd5756b68-vrnpk" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.439754   14218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.449727   14218 pod_ready.go:92] pod "etcd-addons-732359" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.449747   14218 pod_ready.go:81] duration metric: took 9.98701ms waiting for pod "etcd-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.449757   14218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.454864   14218 pod_ready.go:92] pod "kube-apiserver-addons-732359" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.454881   14218 pod_ready.go:81] duration metric: took 5.115782ms waiting for pod "kube-apiserver-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.454892   14218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.459543   14218 pod_ready.go:92] pod "kube-controller-manager-addons-732359" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.459566   14218 pod_ready.go:81] duration metric: took 4.665682ms waiting for pod "kube-controller-manager-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.459583   14218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hjm66" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.465194   14218 pod_ready.go:92] pod "kube-proxy-hjm66" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.465210   14218 pod_ready.go:81] duration metric: took 5.619998ms waiting for pod "kube-proxy-hjm66" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.465222   14218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.519055   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:32.834032   14218 pod_ready.go:92] pod "kube-scheduler-addons-732359" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:32.834053   14218 pod_ready.go:81] duration metric: took 368.825158ms waiting for pod "kube-scheduler-addons-732359" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.834063   14218 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-27qc5" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:32.844229   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:32.844915   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:32.886979   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:33.020515   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:33.341864   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:33.342957   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:33.386810   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:33.671608   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:33.841005   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:33.842350   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:33.885496   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:34.020034   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:34.343593   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:34.343699   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:34.387644   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:34.529707   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:34.844127   14218 pod_ready.go:102] pod "metrics-server-7c66d45ddc-27qc5" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:34.844670   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:34.845661   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:34.884961   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:35.019693   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:35.361558   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:35.361665   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:35.386289   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:35.519948   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:35.891878   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:35.892094   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:35.895932   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:36.019870   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:36.344823   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:36.347018   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:36.391655   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:36.527026   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:36.846400   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:36.847965   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:36.856124   14218 pod_ready.go:102] pod "metrics-server-7c66d45ddc-27qc5" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:36.895514   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:37.022089   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:37.343142   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:37.351079   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:37.386440   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:37.524032   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:37.992252   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:37.999506   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:38.005123   14218 pod_ready.go:92] pod "metrics-server-7c66d45ddc-27qc5" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:38.005154   14218 pod_ready.go:81] duration metric: took 5.171083949s waiting for pod "metrics-server-7c66d45ddc-27qc5" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:38.005170   14218 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:38.009304   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:38.050717   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:38.346379   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:38.346874   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:38.386915   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:38.520045   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:38.842605   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:38.843537   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:38.886990   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:39.018570   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:39.346832   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:39.348494   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:39.386686   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:39.518904   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:39.843370   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:39.844317   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:39.888483   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:40.013014   14218 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:40.021449   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:40.342154   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:40.342527   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:40.385607   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:40.521805   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:40.843158   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:40.845658   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:40.885905   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:41.021843   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:41.341955   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:41.342211   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:41.386951   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:41.518976   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:41.842486   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:41.842637   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:41.885793   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:42.021041   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:42.343411   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:42.343670   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:42.386168   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:42.511902   14218 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:42.521195   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:42.845902   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:42.846938   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:42.887496   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:43.020017   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:43.342300   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:43.342963   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:43.386252   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:43.519773   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:43.843664   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:43.846938   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:43.891382   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:44.019351   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:44.342083   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:44.345461   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:44.387662   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:44.512105   14218 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:44.521055   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:44.845615   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:44.850994   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:44.885897   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:45.018970   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:45.342861   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:45.344911   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:45.386035   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:45.518695   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:45.844145   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:45.845971   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:45.886140   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:46.021068   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:46.343849   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:46.344692   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:46.386273   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:46.512432   14218 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:46.519282   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:46.843049   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:46.843661   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:46.886515   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:47.020736   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:47.542667   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:47.543535   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:47.543863   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:47.546000   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:47.843366   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:47.845429   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:47.887288   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:48.018710   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:48.342917   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:48.345136   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:48.386728   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:48.512667   14218 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:48.520299   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:48.842255   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:48.843533   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:48.963715   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:49.018622   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:49.353418   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:49.376286   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:49.403663   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:49.521046   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:49.842278   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:49.843273   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:49.889875   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:50.011539   14218 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:50.011562   14218 pod_ready.go:81] duration metric: took 12.006382325s waiting for pod "nvidia-device-plugin-daemonset-tghvb" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:50.011585   14218 pod_ready.go:38] duration metric: took 50.647242434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:28:50.011604   14218 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:28:50.011668   14218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:28:50.019109   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:50.033542   14218 api_server.go:72] duration metric: took 59.09469085s to wait for apiserver process to appear ...
	I0115 09:28:50.033564   14218 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:28:50.033587   14218 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0115 09:28:50.038619   14218 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0115 09:28:50.039944   14218 api_server.go:141] control plane version: v1.28.4
	I0115 09:28:50.039967   14218 api_server.go:131] duration metric: took 6.394957ms to wait for apiserver health ...
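	The two checks above (a pgrep for the kube-apiserver process, then an HTTPS probe of /healthz that returned 200 "ok") can be reproduced against the same cluster with standard tooling. A hedged sketch, assuming kubectl still points at the addons-732359 context and using kubectl's authenticated /healthz passthrough instead of the direct HTTPS request the test makes:
	
	    # equivalent of the healthz probe, using the client credentials kubectl already holds
	    kubectl --context addons-732359 get --raw='/healthz'        # prints "ok" when the apiserver is healthy
	    # equivalent of the process check, executed on the minikube node itself
	    minikube -p addons-732359 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	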
	I0115 09:28:50.039978   14218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:28:50.049949   14218 system_pods.go:59] 18 kube-system pods found
	I0115 09:28:50.049978   14218 system_pods.go:61] "coredns-5dd5756b68-vrnpk" [54e76c1d-5104-4438-adf0-c981082259f0] Running
	I0115 09:28:50.049986   14218 system_pods.go:61] "csi-hostpath-attacher-0" [1c23d5e0-daee-4ba1-ab25-e6620f1b6c02] Running
	I0115 09:28:50.049992   14218 system_pods.go:61] "csi-hostpath-resizer-0" [2606a598-c9a5-42b5-9926-0f00c42d92f0] Running
	I0115 09:28:50.050003   14218 system_pods.go:61] "csi-hostpathplugin-rgvzj" [51400031-7eb4-4a57-978d-afd2c6e17305] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 09:28:50.050011   14218 system_pods.go:61] "etcd-addons-732359" [9af3940b-7d4a-4cda-99b5-73312bd81df9] Running
	I0115 09:28:50.050019   14218 system_pods.go:61] "kube-apiserver-addons-732359" [1471c069-c9be-4c73-a58e-db9ab7efeaa0] Running
	I0115 09:28:50.050027   14218 system_pods.go:61] "kube-controller-manager-addons-732359" [fe73412e-37a7-475f-bf18-833ebf314bb1] Running
	I0115 09:28:50.050043   14218 system_pods.go:61] "kube-ingress-dns-minikube" [516abb0c-e072-45c0-a45c-88d7fd266a0c] Running
	I0115 09:28:50.050049   14218 system_pods.go:61] "kube-proxy-hjm66" [4ae0acdb-a612-4262-b2f8-294bf277ce7c] Running
	I0115 09:28:50.050056   14218 system_pods.go:61] "kube-scheduler-addons-732359" [74d4fbe2-cbc5-4b6b-8cb3-d88cad7e2a4a] Running
	I0115 09:28:50.050066   14218 system_pods.go:61] "metrics-server-7c66d45ddc-27qc5" [1ecc618d-a070-4472-8f68-a2c66a387805] Running
	I0115 09:28:50.050076   14218 system_pods.go:61] "nvidia-device-plugin-daemonset-tghvb" [bc860577-d720-42df-8ecd-e81df841a4d1] Running
	I0115 09:28:50.050083   14218 system_pods.go:61] "registry-k5ln6" [45857e37-425a-4aaf-8eff-8045af09133f] Running
	I0115 09:28:50.050090   14218 system_pods.go:61] "registry-proxy-mm4zk" [a7a11774-4ce1-44cc-8c52-48e10c08ab41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 09:28:50.050101   14218 system_pods.go:61] "snapshot-controller-58dbcc7b99-g2zf6" [04532391-01ba-4588-ad1a-837df34baadb] Running
	I0115 09:28:50.050115   14218 system_pods.go:61] "snapshot-controller-58dbcc7b99-tccsk" [c246adc4-00b3-4569-bce3-a1049f0d0c00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 09:28:50.050125   14218 system_pods.go:61] "storage-provisioner" [62eab5d1-282c-40fe-9832-94f244accb57] Running
	I0115 09:28:50.050135   14218 system_pods.go:61] "tiller-deploy-7b677967b9-vhknn" [77b631a8-d1fb-4ad4-82e3-60df11d8591c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0115 09:28:50.050145   14218 system_pods.go:74] duration metric: took 10.160844ms to wait for pod list to return data ...
	I0115 09:28:50.050158   14218 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:28:50.052806   14218 default_sa.go:45] found service account: "default"
	I0115 09:28:50.052823   14218 default_sa.go:55] duration metric: took 2.656959ms for default service account to be created ...
	I0115 09:28:50.052832   14218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:28:50.069747   14218 system_pods.go:86] 18 kube-system pods found
	I0115 09:28:50.069775   14218 system_pods.go:89] "coredns-5dd5756b68-vrnpk" [54e76c1d-5104-4438-adf0-c981082259f0] Running
	I0115 09:28:50.069781   14218 system_pods.go:89] "csi-hostpath-attacher-0" [1c23d5e0-daee-4ba1-ab25-e6620f1b6c02] Running
	I0115 09:28:50.069786   14218 system_pods.go:89] "csi-hostpath-resizer-0" [2606a598-c9a5-42b5-9926-0f00c42d92f0] Running
	I0115 09:28:50.069793   14218 system_pods.go:89] "csi-hostpathplugin-rgvzj" [51400031-7eb4-4a57-978d-afd2c6e17305] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 09:28:50.069802   14218 system_pods.go:89] "etcd-addons-732359" [9af3940b-7d4a-4cda-99b5-73312bd81df9] Running
	I0115 09:28:50.069811   14218 system_pods.go:89] "kube-apiserver-addons-732359" [1471c069-c9be-4c73-a58e-db9ab7efeaa0] Running
	I0115 09:28:50.069823   14218 system_pods.go:89] "kube-controller-manager-addons-732359" [fe73412e-37a7-475f-bf18-833ebf314bb1] Running
	I0115 09:28:50.069836   14218 system_pods.go:89] "kube-ingress-dns-minikube" [516abb0c-e072-45c0-a45c-88d7fd266a0c] Running
	I0115 09:28:50.069845   14218 system_pods.go:89] "kube-proxy-hjm66" [4ae0acdb-a612-4262-b2f8-294bf277ce7c] Running
	I0115 09:28:50.069857   14218 system_pods.go:89] "kube-scheduler-addons-732359" [74d4fbe2-cbc5-4b6b-8cb3-d88cad7e2a4a] Running
	I0115 09:28:50.069868   14218 system_pods.go:89] "metrics-server-7c66d45ddc-27qc5" [1ecc618d-a070-4472-8f68-a2c66a387805] Running
	I0115 09:28:50.069878   14218 system_pods.go:89] "nvidia-device-plugin-daemonset-tghvb" [bc860577-d720-42df-8ecd-e81df841a4d1] Running
	I0115 09:28:50.069884   14218 system_pods.go:89] "registry-k5ln6" [45857e37-425a-4aaf-8eff-8045af09133f] Running
	I0115 09:28:50.069897   14218 system_pods.go:89] "registry-proxy-mm4zk" [a7a11774-4ce1-44cc-8c52-48e10c08ab41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 09:28:50.069904   14218 system_pods.go:89] "snapshot-controller-58dbcc7b99-g2zf6" [04532391-01ba-4588-ad1a-837df34baadb] Running
	I0115 09:28:50.069911   14218 system_pods.go:89] "snapshot-controller-58dbcc7b99-tccsk" [c246adc4-00b3-4569-bce3-a1049f0d0c00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 09:28:50.069918   14218 system_pods.go:89] "storage-provisioner" [62eab5d1-282c-40fe-9832-94f244accb57] Running
	I0115 09:28:50.069924   14218 system_pods.go:89] "tiller-deploy-7b677967b9-vhknn" [77b631a8-d1fb-4ad4-82e3-60df11d8591c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0115 09:28:50.069931   14218 system_pods.go:126] duration metric: took 17.094383ms to wait for k8s-apps to be running ...
	I0115 09:28:50.069941   14218 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:28:50.069984   14218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:28:50.112568   14218 system_svc.go:56] duration metric: took 42.618134ms WaitForService to wait for kubelet.
	I0115 09:28:50.112602   14218 kubeadm.go:581] duration metric: took 59.173756899s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:28:50.112621   14218 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:28:50.116372   14218 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 09:28:50.116409   14218 node_conditions.go:123] node cpu capacity is 2
	I0115 09:28:50.116425   14218 node_conditions.go:105] duration metric: took 3.799716ms to run NodePressure ...
	I0115 09:28:50.116441   14218 start.go:228] waiting for startup goroutines ...
	I0115 09:28:50.342946   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:50.344623   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:50.388320   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:50.521226   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:50.844681   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:50.846188   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:50.889002   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:51.020113   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:51.342254   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:51.342433   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:51.387103   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:51.519991   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:51.842760   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:51.843357   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:51.885597   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:52.019302   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:52.342276   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:52.342636   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:52.385830   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:52.520373   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:52.845366   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:52.845735   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:52.889224   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:53.021999   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:53.346216   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:53.349948   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:53.398629   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:53.526932   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:53.844408   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:53.849029   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:53.906164   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:54.020449   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:54.358108   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:54.358153   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:54.386391   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:54.519194   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:54.843932   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:54.844274   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:54.887432   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:55.023238   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:55.342003   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:55.343154   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:55.386740   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:55.519953   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:55.844861   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:55.847362   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:55.885845   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:56.020102   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:56.342045   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:56.342801   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:56.386210   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:56.520164   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:56.842106   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:56.842199   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:56.889682   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:57.020343   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:57.344444   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:57.344667   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:57.384952   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:57.520302   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:57.842065   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:57.842913   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:57.886961   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:58.020777   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:58.343670   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:58.352585   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:58.390006   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:58.527458   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:58.842537   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:58.843508   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:58.885930   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:59.020719   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:59.341807   14218 kapi.go:107] duration metric: took 1m0.007543683s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 09:28:59.341949   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:59.386203   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:59.520026   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:59.842422   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:59.887595   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:00.020373   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:00.341950   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:00.386520   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:00.520151   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:00.841744   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:00.887083   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:01.085211   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:01.341664   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:01.386528   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:01.520361   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:01.842516   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:01.887846   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:02.021070   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:02.344303   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:02.389290   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:02.520233   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:02.844289   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:02.895098   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:03.021131   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:03.342232   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:03.386780   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:03.526688   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:03.930601   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:03.930998   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:04.021289   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:04.347672   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:04.385459   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:04.530944   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:04.843285   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:04.909355   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:05.019941   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:05.343692   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:05.398712   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:05.519501   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:05.842174   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:05.886328   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:06.020070   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:06.342249   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:06.395248   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:06.520073   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:06.852689   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:06.891227   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:07.020349   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:07.342871   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:07.392487   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:07.519629   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:07.844733   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:07.888470   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:08.022481   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:08.366765   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:08.386927   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:08.520678   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:08.840985   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:08.892155   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:09.022932   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:09.341773   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:09.385587   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:09.519471   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:09.843590   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:09.887963   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:10.021199   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:10.341817   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:10.385722   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:10.519265   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:10.841097   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:10.886899   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:11.019812   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:11.341163   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:11.386280   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:11.525673   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:11.842108   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:11.887304   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:12.088805   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:12.341704   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:12.386573   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:12.519662   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:12.841259   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:12.886301   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:13.020334   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:13.342367   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:13.385626   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:13.519417   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:13.841475   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:13.887332   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:14.023837   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:14.340944   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:14.386132   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:14.520577   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:14.840767   14218 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:29:14.886266   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:15.020105   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:15.342047   14218 kapi.go:107] duration metric: took 1m16.008337709s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 09:29:15.387203   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:15.520539   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:15.886884   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:16.020415   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:16.386740   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:16.519282   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:16.886108   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:17.020283   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:17.392714   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:17.519629   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:29:17.885971   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:18.024906   14218 kapi.go:107] duration metric: took 1m15.509258413s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0115 09:29:18.026796   14218 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-732359 cluster.
	I0115 09:29:18.028321   14218 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 09:29:18.029743   14218 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
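	The gcp-auth messages above describe the opt-out mechanism: pods carrying the gcp-auth-skip-secret label are not mutated by the webhook. A minimal sketch of creating such a pod against this cluster, assuming the conventional label value of "true" (the pod name, image, and command are illustrative only, not taken from the test run):
	
	    kubectl --context addons-732359 run demo-no-gcp-auth \
	      --image=busybox:stable \
	      --labels=gcp-auth-skip-secret=true \
	      --command -- sleep 3600     # label key comes from the message above; the "true" value is an assumption
	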
	I0115 09:29:18.392976   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:18.886892   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:19.387267   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:19.885545   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:20.386175   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:20.886038   14218 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:21.386522   14218 kapi.go:107] duration metric: took 1m21.006529089s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 09:29:21.388428   14218 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, helm-tiller, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0115 09:29:21.389824   14218 addons.go:505] enable addons completed in 1m30.99579782s: enabled=[storage-provisioner nvidia-device-plugin ingress-dns helm-tiller cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0115 09:29:21.389870   14218 start.go:233] waiting for cluster config update ...
	I0115 09:29:21.389896   14218 start.go:242] writing updated cluster config ...
	I0115 09:29:21.390144   14218 ssh_runner.go:195] Run: rm -f paused
	I0115 09:29:21.439304   14218 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 09:29:21.441269   14218 out.go:177] * Done! kubectl is now configured to use "addons-732359" cluster and "default" namespace by default
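	Since the final message reports that kubectl now targets the addons-732359 cluster with the default namespace, the resulting client configuration can be verified with standard kubectl commands (a sketch, not part of the captured output):
	
	    kubectl config current-context                                    # expected: addons-732359
	    kubectl config view --minify --output 'jsonpath={..namespace}'    # empty or "default"
	    kubectl --context addons-732359 -n default get pods               # quick connectivity check
	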
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 09:27:05 UTC, ends at Mon 2024-01-15 09:32:20 UTC. --
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.435901535Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6\"" file="storage/storage_transport.go:185"
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.436235713Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\"" file="storage/storage_transport.go:185"
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.441141361Z" level=debug msg="exporting opaque data as blob \"sha256:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79\"" file="storage/storage_image.go:212"
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.442087164Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e3db313c6dbc065d4ac3
b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34
c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.
io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,RepoTags:[],RepoDigests:[registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21 registry.k8s.io/metrics-server/metrics-server@sha256:ee4304963fb035239bb5c5e8c10f2f38ee80efc16ecbdb9feb7213c17ae2e86e],Size_:70330870,Uid:&Int64Value{Val
ue:65534,},Username:,Spec:nil,},&Image{Id:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,RepoTags:[],RepoDigests:[docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310 docker.io/marcnuri/yakd@sha256:e65e169e9a45f0fa8c0bb25f979481f4ed561aab48df856cba042a75dd34b0a9],Size_:204075024,Uid:&Int64Value{Value:10001,},Username:,Spec:nil,},&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f],Size_:188129131,Uid:nil,Us
ername:,Spec:nil,},&Image{Id:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7],Size_:57899101,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:909c3ff012b7f9fc4b802b73f250ad45e4ffa385299b71fdd6813f70a6711792,RepoTags:[],RepoDigests:[docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86 docker.io/library/registry@sha256:860f379a011eddfab604d9acfe3cf50b2d6e958026fb0f977132b0b083b1a3d7],Size_:25961051,Uid:nil,Username:,Spec:nil,},&Image{Id:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e27
25ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b],Size_:57303140,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c],Size_:56980232,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:754854eab8c1c41bf733ba68c8bbae4cdc5806bd557d0c8c35f692d928489d75,RepoTags:[],RepoDigests:[gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49 gcr.io/cloud-spanner-emulator/emulator@sha256:7e0a9c24dddd7ef923530c1f490ed6382a4e3c9f49e7be7a3cec849bf1bfc30f],Size_:125497816,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,RepoTags
:[],RepoDigests:[registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280],Size_:54632579,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:8cfc3f994a82b92969bf5521603a7f2815cc9a84857b3a888402e19a37423c4b,RepoTags:[],RepoDigests:[nvcr.io/nvidia/k8s-device-plugin@sha256:0153ba5eac2182064434f0101acce97ef512df59a32e1fbbdef12ca75c514e1e nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1],Size_:303559878,Uid:nil,Username:,Spec:nil,},&Image{Id:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80],S
ize_:55070573,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,RepoTags:[],RepoDigests:[docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef docker.io/rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246],Size_:35264960,Uid:nil,Username:,Spec:nil,},&Image{Id:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5 gcr.io/k8s-minikube/kube-registry-proxy@sha256:f107ecd58728a2df5f2bb7e087f65f5363d0019b1e1fd476e4ef16065f44abfb],Size_:146566649,Uid:nil,Username:,Spec:nil,},&Image{Id:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,RepoTags:[],RepoDigests:[ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f],Size_:88649672,Uid:&Int64
Value{Value:65534,},Username:,Spec:nil,},&Image{Id:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c],Size_:21521620,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5],Size_:37200280,Uid:nil,Username:,Spec:nil,},&Image{Id:311f90a3747fd333f687bc8ea3a1bdaa7f19aec377adedcefa818d241ee514f1,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5
c5424c021b2af085d75 registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e],Size_:256568209,Uid:nil,Username:www-data,Spec:nil,},&Image{Id:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0],Size_:19577497,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6d2a98b274382ca188ce121413dcafda936b250500089a622c3f2ce821ab9a69,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf],Size_:49800034,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014
c2c76f9326992,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8],Size_:60675705,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5],Size_:57410185,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
],Size_:4497096,Uid:nil,Username:,Spec:nil,},&Image{Id:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8 docker.io/library/busybox@sha256:cca7bbfb3cd4dc1022f00cee78c51aa46ecc3141188f0dd520978a620697e7ad],Size_:4504102,Uid:nil,Username:,Spec:nil,},&Image{Id:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,RepoTags:[gcr.io/k8s-minikube/busybox:latest],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b],Size_:1462480,Uid:nil,Username:,Spec:nil,},&Image{Id:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67 ghcr.io/headlamp-k8s/headla
mp@sha256:f6e7ee1448cf93788f6991de87868408809a10690bfd3ef61e96318e66924e57],Size_:227053386,Uid:nil,Username:headlamp,Spec:nil,},&Image{Id:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,RepoTags:[docker.io/alpine/helm:2.16.3],RepoDigests:[docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f],Size_:47148757,Uid:nil,Username:,Spec:nil,},&Image{Id:529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686 docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59],Size_:44405005,Uid:nil,Username:,Spec:nil,},&Image{Id:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c docker.io/library/nginx@sha256:4c0f
daa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac],Size_:190867606,Uid:nil,Username:,Spec:nil,},&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=f4256fc2-d4bb-47c8-b7f6-619cdce39e1f name=/runtime.v1.ImageService/ListImages
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.482611272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4d4d35f6-ce67-41ef-97ae-0616389739ce name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.482667671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4d4d35f6-ce67-41ef-97ae-0616389739ce name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.483996009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0b90d733-fb45-4d61-96db-cf3177c8e84f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.485433730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311140485412031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=0b90d733-fb45-4d61-96db-cf3177c8e84f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.486131193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3434e25a-5e2c-4508-8ff9-537cc92bc24c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.486224068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3434e25a-5e2c-4508-8ff9-537cc92bc24c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.486867734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4c7cf5d02a7d803e4a210348b336bc0ff359f3d079d7ee5c0371c76790a6272,PodSandboxId:a4a643f3c7731d6c87b892a3f3815784d6b0c32d9a32560b2be02c956de8d60e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311132044898337,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-khfkn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f79e11d-1954-480e-8794-b4156b7da9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 31dcfdb0,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3dae7135f87eabf1d081c9eae1686ebeb1b8b827d3f5268d0c4af775689a1e,PodSandboxId:8e96f70fae807ae327d6558e97f79ecaead873fa1ca9bd1010e5d9e2236d79fd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705310994102296897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 359ec7f6-2e6a-453f-9838-5987a456e10a,},Annotations:map[string]string{io.kubernet
es.container.hash: 65e7a5cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32aee6ee030fda4fe32e74603ed7cf21345c440179dcf7330b664a487ab595c,PodSandboxId:7207340c660ac554b960bb47ca1672f5c8454aa3fde9c205956c7b70ef411805,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705310984000371720,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d6kzs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 5e5178d2-4c98-44f0-8e54-80b2e8dd906b,},Annotations:map[string]string{io.kubernetes.container.hash: 95eb8335,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd23d0fdb6e62c5d70db098fb6d1b7fe777e8888206c22bb299c984906014d,PodSandboxId:8a0945f66b1dc0e45a650a798f66fb17df7bf5629657bf13923eb55fb1de0115,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705310957289226067,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-glww9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 852c86b8-b0f4-4a3d-bb0f-9f47b141171f,},Annotations:map[string]string{io.kubernetes.container.hash: 767d3860,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c03728c10559bbfd6d43cff8c3820dae4796f4c89dc5f43f323410f8fc8b558,PodSandboxId:45c03c08f350ad6976431efa9785367eef067f64d230dbc194189cfb875a1874,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310938556558394,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n5wzg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c54ad65-0db6-443e-9f6d-0839e1461ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2902bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2912ce2ccd454b4b80675f34a841ed212afc2a5fc9e522ae1b89933ab9ebfffd,PodSandboxId:287bd8ea2568c11bbea6db270d1821c4601bcadb670644c7a3ce01e1319f8ecc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705310933403134535,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-cblmm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 036bdc41-f865-4951-b160-19d14c9ded61,},Annotations:map[string]string{io.kubernetes.container.hash: 3d27b747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:010dfd36ed70286e6a2e0ca0dae1ed3fb4f720877b538812b8765f75118cf4d4,PodSandboxId:6f9d5320f8eec62d436dfd98cd3d604d9ffce3dd358f1a91e639f1396358c14a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310931101251281,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8c4vv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 750d39e0-3567-4fe2-9928-68c07e8fa5be,},Annotations:map[string]string{io.kubernetes.container.hash: b28920eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41b19ebcd8c96bc22d14bd72448146cfc1f8ce94c3a76b346a7122d4b7c878f,PodSandboxId:5cc9710f6a9d2bc3746ac58f2b79d5772210ac1dd944551727f70a64d4897828,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705310890705290298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62eab5d1-282c-40fe-9832-94f244accb57,},Annotations:map[string]string{io.kubernetes.container.hash: 33f66688,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c71dcfdeddbe331c25e60e10b1c7bba44526bb8c127b25e7812f9e47ded2e8f,PodSandboxId:c806b6c9e35c982af92eaead436cbf8ec8aad36fed7f57b35adad52c364e8324,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705310890830634949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-hrthp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4f25ae89-d986-4bdd-8b8b-dd221b88488d,},Annotations:map[string]string{io.kubernetes.container.hash: 87188f12,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801a162fbfeef5ca367b6cafc1ec78e4d56df959258c890e163dcec6f5966107,PodSandboxId:fd685ec5f628af725b8ec28da1f9887811c174d1376635a6302de275b0076aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705310884021151280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjm66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0acdb-a612-4262-b2f8-294bf277ce7c,},Annotations:map[string]string{io.kubernetes.container.hash: c7c1406e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60daef6657f25392ede6649efb4ef4291139f5e5ed2ee5174a772ec4d3c62805,PodSandboxId:928890cd0f5d8636ba288c62c66831162c9516ea044ec49088c94e6f529bdf49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705310873675892438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrnpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e76c1d-5104-4438-adf0-c981082259f0,},Annotations:map[string]string{io.kubernetes.container.hash: cae5a361,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe02d360c62e9bdb1c3f43da45ea5216d
f776faa9d2603c14d957781e668d63,PodSandboxId:6969810956bb3725ed061445f5456d8a717fcf09b9d902433793bdfffb4b635b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705310850391128875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 014719c50eca0af9ef3a2d3f3bf2a3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 96d2f52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e930744462f33f2f36d1f392adb6f5d669a4a61d7b234729ec46a6452dcba652,PodSandboxId:dc08e5ab84e
bfaa61d69d057bc341fd364ae00f69bef0fbe9c3414fc54d173d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705310850155991005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f308a228c7f6439a86cc2bad6aa88,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0139bafe1eb994aea61c447d848ace50fdbb560eab289da5f953135b9c1016c,PodSandboxId:a8826c45d1000281457e81f0cf
9702b0cf0f823c71ae682738f40ab2e4b7f73c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705310850109314978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a603cca63a2c6cb21ced63f0de06f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa375d9d387eb3de3496bc2c401670e647afacf8d674239200b1fd676716c8e,PodSandboxId:93e1b
21de31a4a23f24dc312251ea315eac86430331908c0125d77265d3650ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705310850004179298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70a7d2051a73f61d2a079122c98366,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3434e25a-5e2c-4508-8ff9-537cc92bc24c name=/runtime.v1.RuntimeServi
ce/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.523441238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6abb10cd-7ab2-4819-b713-0a376998d9e0 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.523507614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6abb10cd-7ab2-4819-b713-0a376998d9e0 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.524970374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8f79cfaf-036b-48bb-91a7-706b514c6d27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.530688294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311140530669944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=8f79cfaf-036b-48bb-91a7-706b514c6d27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.532248443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a756ae8-fb30-4017-b5e7-7ac2ac4793d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.532354696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a756ae8-fb30-4017-b5e7-7ac2ac4793d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.533016826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4c7cf5d02a7d803e4a210348b336bc0ff359f3d079d7ee5c0371c76790a6272,PodSandboxId:a4a643f3c7731d6c87b892a3f3815784d6b0c32d9a32560b2be02c956de8d60e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311132044898337,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-khfkn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f79e11d-1954-480e-8794-b4156b7da9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 31dcfdb0,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3dae7135f87eabf1d081c9eae1686ebeb1b8b827d3f5268d0c4af775689a1e,PodSandboxId:8e96f70fae807ae327d6558e97f79ecaead873fa1ca9bd1010e5d9e2236d79fd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705310994102296897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 359ec7f6-2e6a-453f-9838-5987a456e10a,},Annotations:map[string]string{io.kubernet
es.container.hash: 65e7a5cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32aee6ee030fda4fe32e74603ed7cf21345c440179dcf7330b664a487ab595c,PodSandboxId:7207340c660ac554b960bb47ca1672f5c8454aa3fde9c205956c7b70ef411805,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705310984000371720,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d6kzs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 5e5178d2-4c98-44f0-8e54-80b2e8dd906b,},Annotations:map[string]string{io.kubernetes.container.hash: 95eb8335,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd23d0fdb6e62c5d70db098fb6d1b7fe777e8888206c22bb299c984906014d,PodSandboxId:8a0945f66b1dc0e45a650a798f66fb17df7bf5629657bf13923eb55fb1de0115,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705310957289226067,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-glww9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 852c86b8-b0f4-4a3d-bb0f-9f47b141171f,},Annotations:map[string]string{io.kubernetes.container.hash: 767d3860,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c03728c10559bbfd6d43cff8c3820dae4796f4c89dc5f43f323410f8fc8b558,PodSandboxId:45c03c08f350ad6976431efa9785367eef067f64d230dbc194189cfb875a1874,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310938556558394,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n5wzg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c54ad65-0db6-443e-9f6d-0839e1461ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2902bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2912ce2ccd454b4b80675f34a841ed212afc2a5fc9e522ae1b89933ab9ebfffd,PodSandboxId:287bd8ea2568c11bbea6db270d1821c4601bcadb670644c7a3ce01e1319f8ecc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705310933403134535,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-cblmm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 036bdc41-f865-4951-b160-19d14c9ded61,},Annotations:map[string]string{io.kubernetes.container.hash: 3d27b747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:010dfd36ed70286e6a2e0ca0dae1ed3fb4f720877b538812b8765f75118cf4d4,PodSandboxId:6f9d5320f8eec62d436dfd98cd3d604d9ffce3dd358f1a91e639f1396358c14a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310931101251281,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8c4vv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 750d39e0-3567-4fe2-9928-68c07e8fa5be,},Annotations:map[string]string{io.kubernetes.container.hash: b28920eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41b19ebcd8c96bc22d14bd72448146cfc1f8ce94c3a76b346a7122d4b7c878f,PodSandboxId:5cc9710f6a9d2bc3746ac58f2b79d5772210ac1dd944551727f70a64d4897828,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705310890705290298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62eab5d1-282c-40fe-9832-94f244accb57,},Annotations:map[string]string{io.kubernetes.container.hash: 33f66688,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c71dcfdeddbe331c25e60e10b1c7bba44526bb8c127b25e7812f9e47ded2e8f,PodSandboxId:c806b6c9e35c982af92eaead436cbf8ec8aad36fed7f57b35adad52c364e8324,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705310890830634949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-hrthp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4f25ae89-d986-4bdd-8b8b-dd221b88488d,},Annotations:map[string]string{io.kubernetes.container.hash: 87188f12,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801a162fbfeef5ca367b6cafc1ec78e4d56df959258c890e163dcec6f5966107,PodSandboxId:fd685ec5f628af725b8ec28da1f9887811c174d1376635a6302de275b0076aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705310884021151280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjm66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0acdb-a612-4262-b2f8-294bf277ce7c,},Annotations:map[string]string{io.kubernetes.container.hash: c7c1406e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60daef6657f25392ede6649efb4ef4291139f5e5ed2ee5174a772ec4d3c62805,PodSandboxId:928890cd0f5d8636ba288c62c66831162c9516ea044ec49088c94e6f529bdf49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705310873675892438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrnpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e76c1d-5104-4438-adf0-c981082259f0,},Annotations:map[string]string{io.kubernetes.container.hash: cae5a361,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe02d360c62e9bdb1c3f43da45ea5216d
f776faa9d2603c14d957781e668d63,PodSandboxId:6969810956bb3725ed061445f5456d8a717fcf09b9d902433793bdfffb4b635b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705310850391128875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 014719c50eca0af9ef3a2d3f3bf2a3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 96d2f52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e930744462f33f2f36d1f392adb6f5d669a4a61d7b234729ec46a6452dcba652,PodSandboxId:dc08e5ab84e
bfaa61d69d057bc341fd364ae00f69bef0fbe9c3414fc54d173d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705310850155991005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f308a228c7f6439a86cc2bad6aa88,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0139bafe1eb994aea61c447d848ace50fdbb560eab289da5f953135b9c1016c,PodSandboxId:a8826c45d1000281457e81f0cf
9702b0cf0f823c71ae682738f40ab2e4b7f73c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705310850109314978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a603cca63a2c6cb21ced63f0de06f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa375d9d387eb3de3496bc2c401670e647afacf8d674239200b1fd676716c8e,PodSandboxId:93e1b
21de31a4a23f24dc312251ea315eac86430331908c0125d77265d3650ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705310850004179298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70a7d2051a73f61d2a079122c98366,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a756ae8-fb30-4017-b5e7-7ac2ac4793d2 name=/runtime.v1.RuntimeServi
ce/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.573844411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d118aa9e-0271-48f5-aa63-143767ada08b name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.573897634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d118aa9e-0271-48f5-aa63-143767ada08b name=/runtime.v1.RuntimeService/Version
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.575314862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5e3d2c4b-c1da-40b1-81a1-6db04e1966eb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.576597661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311140576580913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=5e3d2c4b-c1da-40b1-81a1-6db04e1966eb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.577371306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33682d2f-52c4-4c32-9ef1-7b116d299b13 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.577427776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33682d2f-52c4-4c32-9ef1-7b116d299b13 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:32:20 addons-732359 crio[713]: time="2024-01-15 09:32:20.577780501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4c7cf5d02a7d803e4a210348b336bc0ff359f3d079d7ee5c0371c76790a6272,PodSandboxId:a4a643f3c7731d6c87b892a3f3815784d6b0c32d9a32560b2be02c956de8d60e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311132044898337,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-khfkn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f79e11d-1954-480e-8794-b4156b7da9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 31dcfdb0,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3dae7135f87eabf1d081c9eae1686ebeb1b8b827d3f5268d0c4af775689a1e,PodSandboxId:8e96f70fae807ae327d6558e97f79ecaead873fa1ca9bd1010e5d9e2236d79fd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705310994102296897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 359ec7f6-2e6a-453f-9838-5987a456e10a,},Annotations:map[string]string{io.kubernet
es.container.hash: 65e7a5cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32aee6ee030fda4fe32e74603ed7cf21345c440179dcf7330b664a487ab595c,PodSandboxId:7207340c660ac554b960bb47ca1672f5c8454aa3fde9c205956c7b70ef411805,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705310984000371720,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d6kzs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 5e5178d2-4c98-44f0-8e54-80b2e8dd906b,},Annotations:map[string]string{io.kubernetes.container.hash: 95eb8335,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd23d0fdb6e62c5d70db098fb6d1b7fe777e8888206c22bb299c984906014d,PodSandboxId:8a0945f66b1dc0e45a650a798f66fb17df7bf5629657bf13923eb55fb1de0115,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705310957289226067,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-glww9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 852c86b8-b0f4-4a3d-bb0f-9f47b141171f,},Annotations:map[string]string{io.kubernetes.container.hash: 767d3860,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c03728c10559bbfd6d43cff8c3820dae4796f4c89dc5f43f323410f8fc8b558,PodSandboxId:45c03c08f350ad6976431efa9785367eef067f64d230dbc194189cfb875a1874,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310938556558394,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n5wzg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c54ad65-0db6-443e-9f6d-0839e1461ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2902bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2912ce2ccd454b4b80675f34a841ed212afc2a5fc9e522ae1b89933ab9ebfffd,PodSandboxId:287bd8ea2568c11bbea6db270d1821c4601bcadb670644c7a3ce01e1319f8ecc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705310933403134535,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-cblmm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 036bdc41-f865-4951-b160-19d14c9ded61,},Annotations:map[string]string{io.kubernetes.container.hash: 3d27b747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:010dfd36ed70286e6a2e0ca0dae1ed3fb4f720877b538812b8765f75118cf4d4,PodSandboxId:6f9d5320f8eec62d436dfd98cd3d604d9ffce3dd358f1a91e639f1396358c14a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705310931101251281,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8c4vv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 750d39e0-3567-4fe2-9928-68c07e8fa5be,},Annotations:map[string]string{io.kubernetes.container.hash: b28920eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41b19ebcd8c96bc22d14bd72448146cfc1f8ce94c3a76b346a7122d4b7c878f,PodSandboxId:5cc9710f6a9d2bc3746ac58f2b79d5772210ac1dd944551727f70a64d4897828,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705310890705290298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62eab5d1-282c-40fe-9832-94f244accb57,},Annotations:map[string]string{io.kubernetes.container.hash: 33f66688,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c71dcfdeddbe331c25e60e10b1c7bba44526bb8c127b25e7812f9e47ded2e8f,PodSandboxId:c806b6c9e35c982af92eaead436cbf8ec8aad36fed7f57b35adad52c364e8324,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705310890830634949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-hrthp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4f25ae89-d986-4bdd-8b8b-dd221b88488d,},Annotations:map[string]string{io.kubernetes.container.hash: 87188f12,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801a162fbfeef5ca367b6cafc1ec78e4d56df959258c890e163dcec6f5966107,PodSandboxId:fd685ec5f628af725b8ec28da1f9887811c174d1376635a6302de275b0076aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705310884021151280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjm66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0acdb-a612-4262-b2f8-294bf277ce7c,},Annotations:map[string]string{io.kubernetes.container.hash: c7c1406e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60daef6657f25392ede6649efb4ef4291139f5e5ed2ee5174a772ec4d3c62805,PodSandboxId:928890cd0f5d8636ba288c62c66831162c9516ea044ec49088c94e6f529bdf49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705310873675892438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrnpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e76c1d-5104-4438-adf0-c981082259f0,},Annotations:map[string]string{io.kubernetes.container.hash: cae5a361,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe02d360c62e9bdb1c3f43da45ea5216d
f776faa9d2603c14d957781e668d63,PodSandboxId:6969810956bb3725ed061445f5456d8a717fcf09b9d902433793bdfffb4b635b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705310850391128875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 014719c50eca0af9ef3a2d3f3bf2a3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 96d2f52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e930744462f33f2f36d1f392adb6f5d669a4a61d7b234729ec46a6452dcba652,PodSandboxId:dc08e5ab84e
bfaa61d69d057bc341fd364ae00f69bef0fbe9c3414fc54d173d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705310850155991005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f308a228c7f6439a86cc2bad6aa88,},Annotations:map[string]string{io.kubernetes.container.hash: b4e5abb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0139bafe1eb994aea61c447d848ace50fdbb560eab289da5f953135b9c1016c,PodSandboxId:a8826c45d1000281457e81f0cf
9702b0cf0f823c71ae682738f40ab2e4b7f73c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705310850109314978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a603cca63a2c6cb21ced63f0de06f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa375d9d387eb3de3496bc2c401670e647afacf8d674239200b1fd676716c8e,PodSandboxId:93e1b
21de31a4a23f24dc312251ea315eac86430331908c0125d77265d3650ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705310850004179298,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-732359,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70a7d2051a73f61d2a079122c98366,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33682d2f-52c4-4c32-9ef1-7b116d299b13 name=/runtime.v1.RuntimeServi
ce/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4c7cf5d02a7d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   a4a643f3c7731       hello-world-app-5d77478584-khfkn
	4c3dae7135f87       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   8e96f70fae807       nginx
	b32aee6ee030f       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   7207340c660ac       headlamp-7ddfbb94ff-d6kzs
	ffbd23d0fdb6e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   8a0945f66b1dc       gcp-auth-d4c87556c-glww9
	4c03728c10559       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   45c03c08f350a       ingress-nginx-admission-patch-n5wzg
	2912ce2ccd454       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   287bd8ea2568c       local-path-provisioner-78b46b4d5c-cblmm
	010dfd36ed702       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   6f9d5320f8eec       ingress-nginx-admission-create-8c4vv
	8c71dcfdeddbe       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   c806b6c9e35c9       yakd-dashboard-9947fc6bf-hrthp
	f41b19ebcd8c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   5cc9710f6a9d2       storage-provisioner
	801a162fbfeef       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   fd685ec5f628a       kube-proxy-hjm66
	60daef6657f25       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   928890cd0f5d8       coredns-5dd5756b68-vrnpk
	bbe02d360c62e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   6969810956bb3       etcd-addons-732359
	e930744462f33       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   dc08e5ab84ebf       kube-apiserver-addons-732359
	e0139bafe1eb9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   a8826c45d1000       kube-controller-manager-addons-732359
	3fa375d9d387e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   93e1b21de31a4       kube-scheduler-addons-732359
	
	
	==> coredns [60daef6657f25392ede6649efb4ef4291139f5e5ed2ee5174a772ec4d3c62805] <==
	[INFO] 10.244.0.8:39546 - 28880 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146159s
	[INFO] 10.244.0.8:49989 - 59040 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128862s
	[INFO] 10.244.0.8:49989 - 24482 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006699s
	[INFO] 10.244.0.8:43541 - 59804 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070889s
	[INFO] 10.244.0.8:43541 - 57241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059956s
	[INFO] 10.244.0.8:40783 - 2880 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00039218s
	[INFO] 10.244.0.8:40783 - 14662 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071948s
	[INFO] 10.244.0.8:50722 - 36093 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000060642s
	[INFO] 10.244.0.8:50722 - 21241 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038189s
	[INFO] 10.244.0.8:57555 - 40674 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032538s
	[INFO] 10.244.0.8:57555 - 26138 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032552s
	[INFO] 10.244.0.8:39858 - 38890 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044969s
	[INFO] 10.244.0.8:39858 - 39140 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003601s
	[INFO] 10.244.0.8:49585 - 48686 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00004042s
	[INFO] 10.244.0.8:49585 - 13359 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100465s
	[INFO] 10.244.0.21:57264 - 42787 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000230574s
	[INFO] 10.244.0.21:58062 - 26357 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096713s
	[INFO] 10.244.0.21:55642 - 57097 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081681s
	[INFO] 10.244.0.21:47156 - 41709 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000066482s
	[INFO] 10.244.0.21:57176 - 11787 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070408s
	[INFO] 10.244.0.21:49233 - 52844 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000056392s
	[INFO] 10.244.0.21:56671 - 23000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000804735s
	[INFO] 10.244.0.21:47472 - 41095 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000439027s
	[INFO] 10.244.0.24:37713 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000253538s
	[INFO] 10.244.0.24:44755 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097129s
	
	
	==> describe nodes <==
	Name:               addons-732359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-732359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=addons-732359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_27_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-732359
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-732359
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:32:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:30:11 +0000   Mon, 15 Jan 2024 09:27:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:30:11 +0000   Mon, 15 Jan 2024 09:27:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:30:11 +0000   Mon, 15 Jan 2024 09:27:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:30:11 +0000   Mon, 15 Jan 2024 09:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    addons-732359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d0158384d1b407c88b103d5beeae722
	  System UUID:                3d015838-4d1b-407c-88b1-03d5beeae722
	  Boot ID:                    db9364e3-3f48-4786-8e84-01e4a5f179f8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-khfkn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-glww9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  headlamp                    headlamp-7ddfbb94ff-d6kzs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-5dd5756b68-vrnpk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-732359                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-732359               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-732359      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-hjm66                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-732359               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  local-path-storage          local-path-provisioner-78b46b4d5c-cblmm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-hrthp             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m12s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-732359 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-732359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-732359 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m42s  kubelet          Node addons-732359 status is now: NodeReady
	  Normal  RegisteredNode           4m31s  node-controller  Node addons-732359 event: Registered Node addons-732359 in Controller
	
	
	==> dmesg <==
	[  +3.497255] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148897] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.037786] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.058589] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.117504] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.141447] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.106663] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.196754] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[  +9.550440] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +8.755990] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[ +20.308550] kauditd_printk_skb: 5 callbacks suppressed
	[Jan15 09:28] kauditd_printk_skb: 59 callbacks suppressed
	[ +17.252983] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.300233] kauditd_printk_skb: 18 callbacks suppressed
	[Jan15 09:29] kauditd_printk_skb: 40 callbacks suppressed
	[ +21.679812] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.238168] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.359476] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.753668] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.048649] kauditd_printk_skb: 14 callbacks suppressed
	[Jan15 09:30] kauditd_printk_skb: 12 callbacks suppressed
	[Jan15 09:32] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [bbe02d360c62e9bdb1c3f43da45ea5216df776faa9d2603c14d957781e668d63] <==
	{"level":"warn","ts":"2024-01-15T09:28:47.53444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.854539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82264"}
	{"level":"info","ts":"2024-01-15T09:28:47.53452Z","caller":"traceutil/trace.go:171","msg":"trace[1493426495] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:970; }","duration":"198.946214ms","start":"2024-01-15T09:28:47.335564Z","end":"2024-01-15T09:28:47.53451Z","steps":["trace[1493426495] 'agreement among raft nodes before linearized reading'  (duration: 198.735639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:28:47.534673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.205565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13485"}
	{"level":"info","ts":"2024-01-15T09:28:47.534734Z","caller":"traceutil/trace.go:171","msg":"trace[425427091] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:970; }","duration":"199.262238ms","start":"2024-01-15T09:28:47.335462Z","end":"2024-01-15T09:28:47.534724Z","steps":["trace[425427091] 'agreement among raft nodes before linearized reading'  (duration: 199.184797ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:28:47.534883Z","caller":"traceutil/trace.go:171","msg":"trace[1008823293] transaction","detail":"{read_only:false; response_revision:970; number_of_response:1; }","duration":"281.337446ms","start":"2024-01-15T09:28:47.253539Z","end":"2024-01-15T09:28:47.534876Z","steps":["trace[1008823293] 'process raft request'  (duration: 280.28072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:28:47.534454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.767095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82264"}
	{"level":"info","ts":"2024-01-15T09:28:47.535026Z","caller":"traceutil/trace.go:171","msg":"trace[1110275464] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:970; }","duration":"157.348688ms","start":"2024-01-15T09:28:47.377672Z","end":"2024-01-15T09:28:47.535021Z","steps":["trace[1110275464] 'agreement among raft nodes before linearized reading'  (duration: 156.652548ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:28:48.951463Z","caller":"traceutil/trace.go:171","msg":"trace[597222247] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"104.919246ms","start":"2024-01-15T09:28:48.846528Z","end":"2024-01-15T09:28:48.951447Z","steps":["trace[597222247] 'process raft request'  (duration: 104.334939ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:03.903867Z","caller":"traceutil/trace.go:171","msg":"trace[1720830728] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"193.129823ms","start":"2024-01-15T09:29:03.710642Z","end":"2024-01-15T09:29:03.903772Z","steps":["trace[1720830728] 'process raft request'  (duration: 192.912037ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:40.534271Z","caller":"traceutil/trace.go:171","msg":"trace[461425727] transaction","detail":"{read_only:false; response_revision:1338; number_of_response:1; }","duration":"301.781391ms","start":"2024-01-15T09:29:40.232464Z","end":"2024-01-15T09:29:40.534245Z","steps":["trace[461425727] 'process raft request'  (duration: 301.692939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:29:40.534731Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T09:29:40.232444Z","time spent":"302.085563ms","remote":"127.0.0.1:51128","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11354,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-732359\" mod_revision:1103 > success:<request_put:<key:\"/registry/minions/addons-732359\" value_size:11315 >> failure:<request_range:<key:\"/registry/minions/addons-732359\" > >"}
	{"level":"info","ts":"2024-01-15T09:29:40.53529Z","caller":"traceutil/trace.go:171","msg":"trace[1248340405] linearizableReadLoop","detail":"{readStateIndex:1382; appliedIndex:1382; }","duration":"202.443041ms","start":"2024-01-15T09:29:40.332838Z","end":"2024-01-15T09:29:40.535281Z","steps":["trace[1248340405] 'read index received'  (duration: 202.440383ms)","trace[1248340405] 'applied index is now lower than readState.Index'  (duration: 2.082µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T09:29:40.547521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.039173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-15T09:29:40.548001Z","caller":"traceutil/trace.go:171","msg":"trace[1866048364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1339; }","duration":"161.421132ms","start":"2024-01-15T09:29:40.38646Z","end":"2024-01-15T09:29:40.547881Z","steps":["trace[1866048364] 'agreement among raft nodes before linearized reading'  (duration: 152.049534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:29:40.54941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.503874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-15T09:29:40.549874Z","caller":"traceutil/trace.go:171","msg":"trace[407923250] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1339; }","duration":"217.055444ms","start":"2024-01-15T09:29:40.332808Z","end":"2024-01-15T09:29:40.549863Z","steps":["trace[407923250] 'agreement among raft nodes before linearized reading'  (duration: 205.679669ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:59.0906Z","caller":"traceutil/trace.go:171","msg":"trace[1049276659] linearizableReadLoop","detail":"{readStateIndex:1557; appliedIndex:1556; }","duration":"304.389644ms","start":"2024-01-15T09:29:58.786197Z","end":"2024-01-15T09:29:59.090587Z","steps":["trace[1049276659] 'read index received'  (duration: 304.228938ms)","trace[1049276659] 'applied index is now lower than readState.Index'  (duration: 160.248µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T09:29:59.090746Z","caller":"traceutil/trace.go:171","msg":"trace[920133305] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"370.576293ms","start":"2024-01-15T09:29:58.720157Z","end":"2024-01-15T09:29:59.090733Z","steps":["trace[920133305] 'process raft request'  (duration: 370.314319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:29:59.090786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.592035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5636"}
	{"level":"warn","ts":"2024-01-15T09:29:59.090858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T09:29:58.720142Z","time spent":"370.668638ms","remote":"127.0.0.1:51126","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1496 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-15T09:29:59.090882Z","caller":"traceutil/trace.go:171","msg":"trace[1939776112] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1503; }","duration":"304.702393ms","start":"2024-01-15T09:29:58.786173Z","end":"2024-01-15T09:29:59.090875Z","steps":["trace[1939776112] 'agreement among raft nodes before linearized reading'  (duration: 304.557141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:29:59.091098Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T09:29:58.786161Z","time spent":"304.927583ms","remote":"127.0.0.1:51130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5659,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-01-15T09:29:59.091139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.52161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5636"}
	{"level":"info","ts":"2024-01-15T09:29:59.09119Z","caller":"traceutil/trace.go:171","msg":"trace[901202316] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1503; }","duration":"288.581221ms","start":"2024-01-15T09:29:58.802602Z","end":"2024-01-15T09:29:59.091183Z","steps":["trace[901202316] 'agreement among raft nodes before linearized reading'  (duration: 288.491224ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:30:30.704076Z","caller":"traceutil/trace.go:171","msg":"trace[1291554427] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"202.73969ms","start":"2024-01-15T09:30:30.501311Z","end":"2024-01-15T09:30:30.704051Z","steps":["trace[1291554427] 'process raft request'  (duration: 202.541343ms)"],"step_count":1}
	
	
	==> gcp-auth [ffbd23d0fdb6e62c5d70db098fb6d1b7fe777e8888206c22bb299c984906014d] <==
	2024/01/15 09:29:17 GCP Auth Webhook started!
	2024/01/15 09:29:21 Ready to marshal response ...
	2024/01/15 09:29:21 Ready to write response ...
	2024/01/15 09:29:21 Ready to marshal response ...
	2024/01/15 09:29:21 Ready to write response ...
	2024/01/15 09:29:31 Ready to marshal response ...
	2024/01/15 09:29:31 Ready to write response ...
	2024/01/15 09:29:33 Ready to marshal response ...
	2024/01/15 09:29:33 Ready to write response ...
	2024/01/15 09:29:36 Ready to marshal response ...
	2024/01/15 09:29:36 Ready to write response ...
	2024/01/15 09:29:36 Ready to marshal response ...
	2024/01/15 09:29:36 Ready to write response ...
	2024/01/15 09:29:36 Ready to marshal response ...
	2024/01/15 09:29:36 Ready to write response ...
	2024/01/15 09:29:41 Ready to marshal response ...
	2024/01/15 09:29:41 Ready to write response ...
	2024/01/15 09:29:49 Ready to marshal response ...
	2024/01/15 09:29:49 Ready to write response ...
	2024/01/15 09:29:50 Ready to marshal response ...
	2024/01/15 09:29:50 Ready to write response ...
	2024/01/15 09:30:23 Ready to marshal response ...
	2024/01/15 09:30:23 Ready to write response ...
	2024/01/15 09:32:09 Ready to marshal response ...
	2024/01/15 09:32:09 Ready to write response ...
	
	
	==> kernel <==
	 09:32:20 up 5 min,  0 users,  load average: 0.76, 1.65, 0.89
	Linux addons-732359 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [e930744462f33f2f36d1f392adb6f5d669a4a61d7b234729ec46a6452dcba652] <==
	I0115 09:29:49.593677       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0115 09:29:49.778064       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.239.146"}
	I0115 09:30:06.870279       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0115 09:30:39.058696       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.058855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.068771       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.068863       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.079194       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0115 09:30:39.087870       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.088071       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.101843       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.101991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.172455       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.172565       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.178236       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.178511       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.210996       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.211061       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:39.229903       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:39.230803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0115 09:30:40.163180       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 09:30:40.230384       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 09:30:40.239274       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0115 09:32:09.664396       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.50.139"}
	E0115 09:32:12.565897       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [e0139bafe1eb994aea61c447d848ace50fdbb560eab289da5f953135b9c1016c] <==
	W0115 09:31:21.872745       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:21.872888       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:23.232801       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:23.232864       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:40.843695       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:40.843773       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:54.534507       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:54.534612       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:32:00.147477       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:32:00.147511       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:32:08.808684       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:32:08.808744       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 09:32:09.408772       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0115 09:32:09.456139       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-khfkn"
	I0115 09:32:09.479265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.415698ms"
	I0115 09:32:09.505396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="25.992762ms"
	I0115 09:32:09.506008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="318.668µs"
	I0115 09:32:09.509539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="60.627µs"
	I0115 09:32:12.461772       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 09:32:12.465209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="83.38µs"
	I0115 09:32:12.469359       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0115 09:32:12.666087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.04006ms"
	I0115 09:32:12.666201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.746µs"
	W0115 09:32:13.828626       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:32:13.828753       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [801a162fbfeef5ca367b6cafc1ec78e4d56df959258c890e163dcec6f5966107] <==
	I0115 09:28:07.699673       1 server_others.go:69] "Using iptables proxy"
	I0115 09:28:07.799324       1 node.go:141] Successfully retrieved node IP: 192.168.39.21
	I0115 09:28:08.325250       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 09:28:08.325398       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 09:28:08.373564       1 server_others.go:152] "Using iptables Proxier"
	I0115 09:28:08.373683       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 09:28:08.381229       1 server.go:846] "Version info" version="v1.28.4"
	I0115 09:28:08.393731       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 09:28:08.409156       1 config.go:188] "Starting service config controller"
	I0115 09:28:08.409223       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 09:28:08.409269       1 config.go:97] "Starting endpoint slice config controller"
	I0115 09:28:08.409289       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 09:28:08.425870       1 config.go:315] "Starting node config controller"
	I0115 09:28:08.426015       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 09:28:08.555154       1 shared_informer.go:318] Caches are synced for node config
	I0115 09:28:08.512245       1 shared_informer.go:318] Caches are synced for service config
	I0115 09:28:08.612349       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3fa375d9d387eb3de3496bc2c401670e647afacf8d674239200b1fd676716c8e] <==
	W0115 09:27:34.927519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:34.927544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 09:27:34.965257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:27:34.965337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 09:27:34.970609       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 09:27:34.970669       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 09:27:35.006112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 09:27:35.006193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 09:27:35.039213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 09:27:35.039428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 09:27:35.108845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 09:27:35.108898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0115 09:27:35.165795       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:27:35.165818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0115 09:27:35.167790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:27:35.167948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 09:27:35.326334       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:35.326383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 09:27:35.340280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:27:35.340330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:27:35.347161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:27:35.347230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 09:27:35.404233       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:35.404359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0115 09:27:37.595369       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 09:27:05 UTC, ends at Mon 2024-01-15 09:32:21 UTC. --
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.467564    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="51400031-7eb4-4a57-978d-afd2c6e17305" containerName="liveness-probe"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.467569    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="1e95a5f6-29da-4137-9765-8305a5c219b5" containerName="task-pv-container"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.467576    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="51400031-7eb4-4a57-978d-afd2c6e17305" containerName="hostpath"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.467582    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="51400031-7eb4-4a57-978d-afd2c6e17305" containerName="csi-snapshotter"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.467587    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="51400031-7eb4-4a57-978d-afd2c6e17305" containerName="csi-provisioner"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.524494    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wvs\" (UniqueName: \"kubernetes.io/projected/9f79e11d-1954-480e-8794-b4156b7da9f6-kube-api-access-67wvs\") pod \"hello-world-app-5d77478584-khfkn\" (UID: \"9f79e11d-1954-480e-8794-b4156b7da9f6\") " pod="default/hello-world-app-5d77478584-khfkn"
	Jan 15 09:32:09 addons-732359 kubelet[1247]: I0115 09:32:09.524578    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9f79e11d-1954-480e-8794-b4156b7da9f6-gcp-creds\") pod \"hello-world-app-5d77478584-khfkn\" (UID: \"9f79e11d-1954-480e-8794-b4156b7da9f6\") " pod="default/hello-world-app-5d77478584-khfkn"
	Jan 15 09:32:10 addons-732359 kubelet[1247]: I0115 09:32:10.937141    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnckn\" (UniqueName: \"kubernetes.io/projected/516abb0c-e072-45c0-a45c-88d7fd266a0c-kube-api-access-lnckn\") pod \"516abb0c-e072-45c0-a45c-88d7fd266a0c\" (UID: \"516abb0c-e072-45c0-a45c-88d7fd266a0c\") "
	Jan 15 09:32:10 addons-732359 kubelet[1247]: I0115 09:32:10.942012    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516abb0c-e072-45c0-a45c-88d7fd266a0c-kube-api-access-lnckn" (OuterVolumeSpecName: "kube-api-access-lnckn") pod "516abb0c-e072-45c0-a45c-88d7fd266a0c" (UID: "516abb0c-e072-45c0-a45c-88d7fd266a0c"). InnerVolumeSpecName "kube-api-access-lnckn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 09:32:11 addons-732359 kubelet[1247]: I0115 09:32:11.038480    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lnckn\" (UniqueName: \"kubernetes.io/projected/516abb0c-e072-45c0-a45c-88d7fd266a0c-kube-api-access-lnckn\") on node \"addons-732359\" DevicePath \"\""
	Jan 15 09:32:11 addons-732359 kubelet[1247]: I0115 09:32:11.619233    1247 scope.go:117] "RemoveContainer" containerID="1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1"
	Jan 15 09:32:11 addons-732359 kubelet[1247]: I0115 09:32:11.671303    1247 scope.go:117] "RemoveContainer" containerID="1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1"
	Jan 15 09:32:11 addons-732359 kubelet[1247]: E0115 09:32:11.672802    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1\": container with ID starting with 1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1 not found: ID does not exist" containerID="1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1"
	Jan 15 09:32:11 addons-732359 kubelet[1247]: I0115 09:32:11.672855    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1"} err="failed to get container status \"1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1\": rpc error: code = NotFound desc = could not find container \"1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1\": container with ID starting with 1f29d8d11a1dcf29522373019e01093dfab5f65747917c62ca60018f4e8494a1 not found: ID does not exist"
	Jan 15 09:32:13 addons-732359 kubelet[1247]: I0115 09:32:13.412129    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c54ad65-0db6-443e-9f6d-0839e1461ef4" path="/var/lib/kubelet/pods/2c54ad65-0db6-443e-9f6d-0839e1461ef4/volumes"
	Jan 15 09:32:13 addons-732359 kubelet[1247]: I0115 09:32:13.412536    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="516abb0c-e072-45c0-a45c-88d7fd266a0c" path="/var/lib/kubelet/pods/516abb0c-e072-45c0-a45c-88d7fd266a0c/volumes"
	Jan 15 09:32:13 addons-732359 kubelet[1247]: I0115 09:32:13.412886    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="750d39e0-3567-4fe2-9928-68c07e8fa5be" path="/var/lib/kubelet/pods/750d39e0-3567-4fe2-9928-68c07e8fa5be/volumes"
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.782199    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxv5s\" (UniqueName: \"kubernetes.io/projected/aeb85cbb-6efc-412a-82ef-aac5af09b180-kube-api-access-qxv5s\") pod \"aeb85cbb-6efc-412a-82ef-aac5af09b180\" (UID: \"aeb85cbb-6efc-412a-82ef-aac5af09b180\") "
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.782284    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeb85cbb-6efc-412a-82ef-aac5af09b180-webhook-cert\") pod \"aeb85cbb-6efc-412a-82ef-aac5af09b180\" (UID: \"aeb85cbb-6efc-412a-82ef-aac5af09b180\") "
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.784642    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb85cbb-6efc-412a-82ef-aac5af09b180-kube-api-access-qxv5s" (OuterVolumeSpecName: "kube-api-access-qxv5s") pod "aeb85cbb-6efc-412a-82ef-aac5af09b180" (UID: "aeb85cbb-6efc-412a-82ef-aac5af09b180"). InnerVolumeSpecName "kube-api-access-qxv5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.786459    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb85cbb-6efc-412a-82ef-aac5af09b180-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "aeb85cbb-6efc-412a-82ef-aac5af09b180" (UID: "aeb85cbb-6efc-412a-82ef-aac5af09b180"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.883220    1247 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeb85cbb-6efc-412a-82ef-aac5af09b180-webhook-cert\") on node \"addons-732359\" DevicePath \"\""
	Jan 15 09:32:15 addons-732359 kubelet[1247]: I0115 09:32:15.883267    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qxv5s\" (UniqueName: \"kubernetes.io/projected/aeb85cbb-6efc-412a-82ef-aac5af09b180-kube-api-access-qxv5s\") on node \"addons-732359\" DevicePath \"\""
	Jan 15 09:32:16 addons-732359 kubelet[1247]: I0115 09:32:16.670674    1247 scope.go:117] "RemoveContainer" containerID="324682aa6e268f4b1740038538e69c9ba7d0d37db370712eb89dd90dfd279a36"
	Jan 15 09:32:17 addons-732359 kubelet[1247]: I0115 09:32:17.412118    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aeb85cbb-6efc-412a-82ef-aac5af09b180" path="/var/lib/kubelet/pods/aeb85cbb-6efc-412a-82ef-aac5af09b180/volumes"
	
	
	==> storage-provisioner [f41b19ebcd8c96bc22d14bd72448146cfc1f8ce94c3a76b346a7122d4b7c878f] <==
	I0115 09:28:12.088774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 09:28:12.163291       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 09:28:12.163392       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 09:28:12.265141       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 09:28:12.280884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-732359_70010149-3506-43fa-8936-b0a248267294!
	I0115 09:28:12.281359       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"075ce18c-2703-4c4b-8347-d1f58228dac1", APIVersion:"v1", ResourceVersion:"843", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-732359_70010149-3506-43fa-8936-b0a248267294 became leader
	I0115 09:28:12.587024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-732359_70010149-3506-43fa-8936-b0a248267294!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-732359 -n addons-732359
helpers_test.go:261: (dbg) Run:  kubectl --context addons-732359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.51s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-732359
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-732359: exit status 82 (2m1.652804524s)

                                                
                                                
-- stdout --
	* Stopping node "addons-732359"  ...
	* Stopping node "addons-732359"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-732359" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-732359
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-732359: exit status 11 (21.564268068s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-732359" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-732359
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-732359: exit status 11 (6.144949077s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-732359" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-732359
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-732359: exit status 11 (6.143108946s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-732359" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.51s)
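For context on how the exit statuses above relate, here is a minimal Go sketch that re-runs the same command sequence recorded in this failure and prints each exit code. This is a hypothetical standalone reproducer, not the actual addons_test.go helpers; the binary path (out/minikube-linux-amd64) and the profile name (addons-732359) come from the log, everything else is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube shells out to the same minikube binary used in this job and
// returns the process exit code (0 on success, -1 if the command never ran).
func runMinikube(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s\n", args, out)
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			return exitErr.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	profile := "addons-732359" // profile name taken from the log above
	fmt.Println("stop exit:", runMinikube("stop", "-p", profile))
	fmt.Println("enable dashboard exit:", runMinikube("addons", "enable", "dashboard", "-p", profile))
	fmt.Println("disable dashboard exit:", runMinikube("addons", "disable", "dashboard", "-p", profile))
	fmt.Println("disable gvisor exit:", runMinikube("addons", "disable", "gvisor", "-p", profile))
}

In the run above, stop exited 82 and each addon command exited 11 (MK_ADDON_ENABLE_PAUSED / MK_ADDON_DISABLE_PAUSED) because the VM at 192.168.39.21 could no longer be reached over SSH.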

TestIngressAddonLegacy/serial/ValidateIngressAddons (177.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-799339 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-799339 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.871095699s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-799339 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-799339 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [370fe753-c6ef-4033-8f4b-752d94c9c6b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [370fe753-c6ef-4033-8f4b-752d94c9c6b6] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.00408865s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0115 09:42:05.299134   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:44:12.883155   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:12.889227   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:12.899465   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:12.919717   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:12.960049   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:13.040339   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:13.200758   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:13.521348   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:14.162301   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.560768936s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-799339 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.118
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons disable ingress-dns --alsologtostderr -v=1
E0115 09:44:15.442895   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:18.003608   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:44:21.453615   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:44:23.124348   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons disable ingress-dns --alsologtostderr -v=1: (13.060673256s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons disable ingress --alsologtostderr -v=1
E0115 09:44:33.365325   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons disable ingress --alsologtostderr -v=1: (7.557296084s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-799339 -n ingress-addon-legacy-799339
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-799339 logs -n 25: (1.105938312s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-302200                                                         | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-302200 image ls                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	| image          | functional-302200 image load --daemon                                     | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-302200                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200 image ls                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	| image          | functional-302200 image save                                              | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-302200                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200 image rm                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-302200                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200 image ls                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	| image          | functional-302200 image load                                              | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200 image ls                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	| image          | functional-302200 image save --daemon                                     | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-302200                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200                                                         | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200                                                         | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200                                                         | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-302200                                                         | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-302200 ssh pgrep                                               | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-302200 image build -t                                          | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:40 UTC |
	|                | localhost/my-image:functional-302200                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-302200 image ls                                                | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:40 UTC | 15 Jan 24 09:40 UTC |
	| delete         | -p functional-302200                                                      | functional-302200           | jenkins | v1.32.0 | 15 Jan 24 09:40 UTC | 15 Jan 24 09:40 UTC |
	| start          | -p ingress-addon-legacy-799339                                            | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:40 UTC | 15 Jan 24 09:41 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-799339                                               | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:41 UTC | 15 Jan 24 09:41 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-799339                                               | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:41 UTC | 15 Jan 24 09:41 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-799339                                               | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:42 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-799339 ip                                            | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:44 UTC | 15 Jan 24 09:44 UTC |
	| addons         | ingress-addon-legacy-799339                                               | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:44 UTC | 15 Jan 24 09:44 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-799339                                               | ingress-addon-legacy-799339 | jenkins | v1.32.0 | 15 Jan 24 09:44 UTC | 15 Jan 24 09:44 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:40:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:40:03.146818   22380 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:40:03.146926   22380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:40:03.146935   22380 out.go:309] Setting ErrFile to fd 2...
	I0115 09:40:03.146939   22380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:40:03.147110   22380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:40:03.147644   22380 out.go:303] Setting JSON to false
	I0115 09:40:03.148419   22380 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1303,"bootTime":1705310300,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:40:03.148479   22380 start.go:138] virtualization: kvm guest
	I0115 09:40:03.150817   22380 out.go:177] * [ingress-addon-legacy-799339] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:40:03.152240   22380 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:40:03.152243   22380 notify.go:220] Checking for updates...
	I0115 09:40:03.153623   22380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:40:03.155178   22380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:40:03.156711   22380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:40:03.157932   22380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:40:03.159267   22380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:40:03.160582   22380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:40:03.193551   22380 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 09:40:03.194853   22380 start.go:298] selected driver: kvm2
	I0115 09:40:03.194870   22380 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:40:03.194879   22380 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:40:03.195517   22380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:40:03.195595   22380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:40:03.209054   22380 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:40:03.209111   22380 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:40:03.209340   22380 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:40:03.209406   22380 cni.go:84] Creating CNI manager for ""
	I0115 09:40:03.209420   22380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:40:03.209430   22380 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 09:40:03.209442   22380 start_flags.go:321] config:
	{Name:ingress-addon-legacy-799339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-799339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:40:03.209568   22380 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:40:03.212050   22380 out.go:177] * Starting control plane node ingress-addon-legacy-799339 in cluster ingress-addon-legacy-799339
	I0115 09:40:03.213355   22380 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:40:03.241090   22380 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0115 09:40:03.241105   22380 cache.go:56] Caching tarball of preloaded images
	I0115 09:40:03.241211   22380 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:40:03.242962   22380 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0115 09:40:03.244403   22380 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:40:03.273449   22380 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0115 09:40:07.037963   22380 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:40:07.038050   22380 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:40:08.015302   22380 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0115 09:40:08.015691   22380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/config.json ...
	I0115 09:40:08.015734   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/config.json: {Name:mkc59359bf4f9104c8f41ce1bfb64c3e609341e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:08.015927   22380 start.go:365] acquiring machines lock for ingress-addon-legacy-799339: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:40:08.015979   22380 start.go:369] acquired machines lock for "ingress-addon-legacy-799339" in 32.072µs
	I0115 09:40:08.016003   22380 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-799339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-799339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:40:08.016099   22380 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 09:40:08.018471   22380 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0115 09:40:08.018625   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:40:08.018670   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:40:08.032212   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0115 09:40:08.032631   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:40:08.033243   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:40:08.033256   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:40:08.033514   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:40:08.033683   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetMachineName
	I0115 09:40:08.033793   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:08.033927   22380 start.go:159] libmachine.API.Create for "ingress-addon-legacy-799339" (driver="kvm2")
	I0115 09:40:08.033957   22380 client.go:168] LocalClient.Create starting
	I0115 09:40:08.033990   22380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 09:40:08.034026   22380 main.go:141] libmachine: Decoding PEM data...
	I0115 09:40:08.034044   22380 main.go:141] libmachine: Parsing certificate...
	I0115 09:40:08.034107   22380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 09:40:08.034131   22380 main.go:141] libmachine: Decoding PEM data...
	I0115 09:40:08.034147   22380 main.go:141] libmachine: Parsing certificate...
	I0115 09:40:08.034183   22380 main.go:141] libmachine: Running pre-create checks...
	I0115 09:40:08.034197   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .PreCreateCheck
	I0115 09:40:08.034450   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetConfigRaw
	I0115 09:40:08.034824   22380 main.go:141] libmachine: Creating machine...
	I0115 09:40:08.034838   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Create
	I0115 09:40:08.034932   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Creating KVM machine...
	I0115 09:40:08.036120   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found existing default KVM network
	I0115 09:40:08.036706   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:08.036589   22414 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0115 09:40:08.041726   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | trying to create private KVM network mk-ingress-addon-legacy-799339 192.168.39.0/24...
	I0115 09:40:08.108135   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | private KVM network mk-ingress-addon-legacy-799339 192.168.39.0/24 created
	I0115 09:40:08.108170   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:08.108095   22414 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:40:08.108186   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339 ...
	I0115 09:40:08.108206   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 09:40:08.108222   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 09:40:08.305040   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:08.304880   22414 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa...
	I0115 09:40:08.516584   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:08.516432   22414 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/ingress-addon-legacy-799339.rawdisk...
	I0115 09:40:08.516624   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Writing magic tar header
	I0115 09:40:08.516648   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Writing SSH key tar header
	I0115 09:40:08.516662   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:08.516590   22414 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339 ...
	I0115 09:40:08.516750   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339
	I0115 09:40:08.516778   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 09:40:08.516800   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339 (perms=drwx------)
	I0115 09:40:08.516820   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:40:08.516835   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 09:40:08.516846   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 09:40:08.516861   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 09:40:08.516875   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home/jenkins
	I0115 09:40:08.516890   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 09:40:08.516907   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 09:40:08.516922   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 09:40:08.516940   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 09:40:08.516958   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Checking permissions on dir: /home
	I0115 09:40:08.516982   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Creating domain...
	I0115 09:40:08.516997   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Skipping /home - not owner
	I0115 09:40:08.518053   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) define libvirt domain using xml: 
	I0115 09:40:08.518093   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) <domain type='kvm'>
	I0115 09:40:08.518106   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <name>ingress-addon-legacy-799339</name>
	I0115 09:40:08.518115   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <memory unit='MiB'>4096</memory>
	I0115 09:40:08.518121   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <vcpu>2</vcpu>
	I0115 09:40:08.518129   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <features>
	I0115 09:40:08.518138   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <acpi/>
	I0115 09:40:08.518144   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <apic/>
	I0115 09:40:08.518150   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <pae/>
	I0115 09:40:08.518159   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     
	I0115 09:40:08.518168   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   </features>
	I0115 09:40:08.518174   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <cpu mode='host-passthrough'>
	I0115 09:40:08.518182   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   
	I0115 09:40:08.518187   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   </cpu>
	I0115 09:40:08.518196   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <os>
	I0115 09:40:08.518204   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <type>hvm</type>
	I0115 09:40:08.518213   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <boot dev='cdrom'/>
	I0115 09:40:08.518218   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <boot dev='hd'/>
	I0115 09:40:08.518281   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <bootmenu enable='no'/>
	I0115 09:40:08.518312   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   </os>
	I0115 09:40:08.518336   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   <devices>
	I0115 09:40:08.518356   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <disk type='file' device='cdrom'>
	I0115 09:40:08.518375   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/boot2docker.iso'/>
	I0115 09:40:08.518385   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <target dev='hdc' bus='scsi'/>
	I0115 09:40:08.518394   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <readonly/>
	I0115 09:40:08.518402   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </disk>
	I0115 09:40:08.518409   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <disk type='file' device='disk'>
	I0115 09:40:08.518435   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 09:40:08.518457   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/ingress-addon-legacy-799339.rawdisk'/>
	I0115 09:40:08.518476   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <target dev='hda' bus='virtio'/>
	I0115 09:40:08.518491   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </disk>
	I0115 09:40:08.518503   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <interface type='network'>
	I0115 09:40:08.518515   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <source network='mk-ingress-addon-legacy-799339'/>
	I0115 09:40:08.518528   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <model type='virtio'/>
	I0115 09:40:08.518541   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </interface>
	I0115 09:40:08.518558   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <interface type='network'>
	I0115 09:40:08.518572   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <source network='default'/>
	I0115 09:40:08.518581   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <model type='virtio'/>
	I0115 09:40:08.518591   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </interface>
	I0115 09:40:08.518604   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <serial type='pty'>
	I0115 09:40:08.518616   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <target port='0'/>
	I0115 09:40:08.518637   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </serial>
	I0115 09:40:08.518651   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <console type='pty'>
	I0115 09:40:08.518664   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <target type='serial' port='0'/>
	I0115 09:40:08.518674   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </console>
	I0115 09:40:08.518683   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     <rng model='virtio'>
	I0115 09:40:08.518699   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)       <backend model='random'>/dev/random</backend>
	I0115 09:40:08.518716   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     </rng>
	I0115 09:40:08.518729   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     
	I0115 09:40:08.518740   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)     
	I0115 09:40:08.518753   22380 main.go:141] libmachine: (ingress-addon-legacy-799339)   </devices>
	I0115 09:40:08.518761   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) </domain>
	I0115 09:40:08.518775   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) 
	I0115 09:40:08.522836   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:a2:c8:ed in network default
	I0115 09:40:08.523389   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Ensuring networks are active...
	I0115 09:40:08.523404   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:08.524001   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Ensuring network default is active
	I0115 09:40:08.524270   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Ensuring network mk-ingress-addon-legacy-799339 is active
	I0115 09:40:08.524741   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Getting domain xml...
	I0115 09:40:08.525404   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Creating domain...
	I0115 09:40:09.685664   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Waiting to get IP...
	I0115 09:40:09.686562   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:09.686920   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:09.686969   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:09.686909   22414 retry.go:31] will retry after 240.548419ms: waiting for machine to come up
	I0115 09:40:09.929319   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:09.929712   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:09.929740   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:09.929675   22414 retry.go:31] will retry after 380.85796ms: waiting for machine to come up
	I0115 09:40:10.312230   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:10.312658   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:10.312682   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:10.312610   22414 retry.go:31] will retry after 423.618301ms: waiting for machine to come up
	I0115 09:40:10.738210   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:10.738599   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:10.738626   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:10.738550   22414 retry.go:31] will retry after 549.782503ms: waiting for machine to come up
	I0115 09:40:11.290149   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:11.290543   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:11.290568   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:11.290499   22414 retry.go:31] will retry after 609.926767ms: waiting for machine to come up
	I0115 09:40:11.902353   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:11.902870   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:11.902905   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:11.902755   22414 retry.go:31] will retry after 916.173236ms: waiting for machine to come up
	I0115 09:40:12.820803   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:12.821182   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:12.821208   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:12.821144   22414 retry.go:31] will retry after 907.369681ms: waiting for machine to come up
	I0115 09:40:13.730597   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:13.731050   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:13.731073   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:13.731010   22414 retry.go:31] will retry after 1.154394127s: waiting for machine to come up
	I0115 09:40:14.887326   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:14.887732   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:14.887773   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:14.887683   22414 retry.go:31] will retry after 1.388312579s: waiting for machine to come up
	I0115 09:40:16.278082   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:16.278513   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:16.278544   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:16.278457   22414 retry.go:31] will retry after 1.399849526s: waiting for machine to come up
	I0115 09:40:17.680110   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:17.680576   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:17.680610   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:17.680564   22414 retry.go:31] will retry after 2.640616395s: waiting for machine to come up
	I0115 09:40:20.323818   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:20.324276   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:20.324306   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:20.324233   22414 retry.go:31] will retry after 2.440806557s: waiting for machine to come up
	I0115 09:40:22.767398   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:22.767855   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:22.767880   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:22.767808   22414 retry.go:31] will retry after 3.819147419s: waiting for machine to come up
	I0115 09:40:26.591047   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:26.591481   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find current IP address of domain ingress-addon-legacy-799339 in network mk-ingress-addon-legacy-799339
	I0115 09:40:26.591503   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | I0115 09:40:26.591445   22414 retry.go:31] will retry after 4.792265494s: waiting for machine to come up
	I0115 09:40:31.387537   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:31.387924   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Found IP for machine: 192.168.39.118
	I0115 09:40:31.387955   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Reserving static IP address...
	I0115 09:40:31.387975   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has current primary IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:31.388290   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-799339", mac: "52:54:00:94:2d:66", ip: "192.168.39.118"} in network mk-ingress-addon-legacy-799339
	I0115 09:40:31.455661   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Getting to WaitForSSH function...
	I0115 09:40:31.455708   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Reserved static IP address: 192.168.39.118
	I0115 09:40:31.455724   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Waiting for SSH to be available...
	I0115 09:40:31.458370   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:31.458845   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339
	I0115 09:40:31.458875   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-799339 interface with MAC address 52:54:00:94:2d:66
	I0115 09:40:31.458993   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Using SSH client type: external
	I0115 09:40:31.459034   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa (-rw-------)
	I0115 09:40:31.459070   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 09:40:31.459094   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | About to run SSH command:
	I0115 09:40:31.459115   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | exit 0
	I0115 09:40:31.462536   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | SSH cmd err, output: exit status 255: 
	I0115 09:40:31.462565   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0115 09:40:31.462578   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | command : exit 0
	I0115 09:40:31.462589   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | err     : exit status 255
	I0115 09:40:31.462604   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | output  : 
	I0115 09:40:34.463977   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Getting to WaitForSSH function...
	I0115 09:40:34.466122   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.466451   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:34.466485   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.466562   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Using SSH client type: external
	I0115 09:40:34.466606   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa (-rw-------)
	I0115 09:40:34.466653   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 09:40:34.466672   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | About to run SSH command:
	I0115 09:40:34.466686   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | exit 0
	I0115 09:40:34.549779   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | SSH cmd err, output: <nil>: 
	I0115 09:40:34.550004   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) KVM machine creation complete!
	I0115 09:40:34.550364   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetConfigRaw
	I0115 09:40:34.550903   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:34.551106   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:34.551385   22380 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 09:40:34.551402   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetState
	I0115 09:40:34.552617   22380 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 09:40:34.552631   22380 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 09:40:34.552637   22380 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 09:40:34.552644   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:34.554930   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.555269   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:34.555318   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.555435   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:34.555616   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.555751   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.555855   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:34.556003   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:34.556321   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:34.556334   22380 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 09:40:34.661412   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:40:34.661436   22380 main.go:141] libmachine: Detecting the provisioner...
	I0115 09:40:34.661448   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:34.664019   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.664298   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:34.664332   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.664472   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:34.664640   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.664789   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.664896   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:34.665051   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:34.665453   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:34.665468   22380 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 09:40:34.774800   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 09:40:34.774872   22380 main.go:141] libmachine: found compatible host: buildroot
	I0115 09:40:34.774886   22380 main.go:141] libmachine: Provisioning with buildroot...
	I0115 09:40:34.774903   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetMachineName
	I0115 09:40:34.775108   22380 buildroot.go:166] provisioning hostname "ingress-addon-legacy-799339"
	I0115 09:40:34.775127   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetMachineName
	I0115 09:40:34.775325   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:34.777688   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.778020   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:34.778056   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.778206   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:34.778426   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.778580   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.778720   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:34.778843   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:34.779166   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:34.779182   22380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-799339 && echo "ingress-addon-legacy-799339" | sudo tee /etc/hostname
	I0115 09:40:34.898543   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-799339
	
	I0115 09:40:34.898573   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:34.901200   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.901506   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:34.901533   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:34.901680   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:34.901861   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.902027   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:34.902160   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:34.902294   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:34.902643   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:34.902662   22380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-799339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-799339/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-799339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:40:35.018525   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:40:35.018548   22380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 09:40:35.018594   22380 buildroot.go:174] setting up certificates
	I0115 09:40:35.018611   22380 provision.go:83] configureAuth start
	I0115 09:40:35.018626   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetMachineName
	I0115 09:40:35.018855   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetIP
	I0115 09:40:35.021327   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.021633   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.021655   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.021774   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.023959   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.024245   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.024273   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.024366   22380 provision.go:138] copyHostCerts
	I0115 09:40:35.024398   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:40:35.024434   22380 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 09:40:35.024444   22380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:40:35.024518   22380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 09:40:35.024617   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:40:35.024636   22380 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 09:40:35.024643   22380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:40:35.024679   22380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 09:40:35.024740   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:40:35.024769   22380 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 09:40:35.024775   22380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:40:35.024796   22380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 09:40:35.024874   22380 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-799339 san=[192.168.39.118 192.168.39.118 localhost 127.0.0.1 minikube ingress-addon-legacy-799339]
	I0115 09:40:35.306150   22380 provision.go:172] copyRemoteCerts
	I0115 09:40:35.306209   22380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:40:35.306236   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.308863   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.309144   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.309171   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.309350   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:35.309537   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.309668   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:35.309788   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:40:35.391104   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:40:35.391163   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 09:40:35.415843   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:40:35.415889   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 09:40:35.439859   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:40:35.439918   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 09:40:35.463772   22380 provision.go:86] duration metric: configureAuth took 445.148965ms
	I0115 09:40:35.463793   22380 buildroot.go:189] setting minikube options for container-runtime
	I0115 09:40:35.463980   22380 config.go:182] Loaded profile config "ingress-addon-legacy-799339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 09:40:35.464146   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.466544   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.466888   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.466920   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.467021   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:35.467236   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.467381   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.467536   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:35.467691   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:35.467994   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:35.468016   22380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:40:35.758328   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:40:35.758355   22380 main.go:141] libmachine: Checking connection to Docker...
	I0115 09:40:35.758368   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetURL
	I0115 09:40:35.759535   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Using libvirt version 6000000
	I0115 09:40:35.761459   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.761763   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.761791   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.761899   22380 main.go:141] libmachine: Docker is up and running!
	I0115 09:40:35.761919   22380 main.go:141] libmachine: Reticulating splines...
	I0115 09:40:35.761925   22380 client.go:171] LocalClient.Create took 27.727958061s
	I0115 09:40:35.761947   22380 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-799339" took 27.728019966s
	I0115 09:40:35.761961   22380 start.go:300] post-start starting for "ingress-addon-legacy-799339" (driver="kvm2")
	I0115 09:40:35.761988   22380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:40:35.762019   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:35.762221   22380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:40:35.762243   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.764084   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.764341   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.764367   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.764457   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:35.764624   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.764782   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:35.764916   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:40:35.847257   22380 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:40:35.851520   22380 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 09:40:35.851544   22380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 09:40:35.851618   22380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 09:40:35.851743   22380 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 09:40:35.851757   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 09:40:35.851917   22380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:40:35.859923   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:40:35.882109   22380 start.go:303] post-start completed in 120.135656ms
	I0115 09:40:35.882155   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetConfigRaw
	I0115 09:40:35.882686   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetIP
	I0115 09:40:35.885258   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.885556   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.885579   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.885807   22380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/config.json ...
	I0115 09:40:35.886010   22380 start.go:128] duration metric: createHost completed in 27.869898081s
	I0115 09:40:35.886033   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.888193   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.888512   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.888554   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.888688   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:35.888835   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.888977   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:35.889070   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:35.889189   22380 main.go:141] libmachine: Using SSH client type: native
	I0115 09:40:35.889498   22380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0115 09:40:35.889510   22380 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 09:40:35.994949   22380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705311635.970714237
	
	I0115 09:40:35.994973   22380 fix.go:206] guest clock: 1705311635.970714237
	I0115 09:40:35.994981   22380 fix.go:219] Guest: 2024-01-15 09:40:35.970714237 +0000 UTC Remote: 2024-01-15 09:40:35.886022743 +0000 UTC m=+32.788301526 (delta=84.691494ms)
	I0115 09:40:35.995015   22380 fix.go:190] guest clock delta is within tolerance: 84.691494ms
	I0115 09:40:35.995019   22380 start.go:83] releasing machines lock for "ingress-addon-legacy-799339", held for 27.979030861s
	I0115 09:40:35.995041   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:35.995298   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetIP
	I0115 09:40:35.998239   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.998637   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:35.998676   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:35.998787   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:35.999273   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:35.999462   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:40:35.999544   22380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:40:35.999584   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:35.999700   22380 ssh_runner.go:195] Run: cat /version.json
	I0115 09:40:35.999725   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:40:36.002153   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:36.002389   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:36.002538   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:36.002572   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:36.002694   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:36.002822   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:36.002841   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:36.002848   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:36.003014   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:40:36.003030   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:36.003184   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:40:36.003185   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:40:36.003305   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:40:36.003415   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:40:36.106289   22380 ssh_runner.go:195] Run: systemctl --version
	I0115 09:40:36.111827   22380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:40:36.274394   22380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 09:40:36.279996   22380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 09:40:36.280080   22380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:40:36.295585   22380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 09:40:36.295608   22380 start.go:475] detecting cgroup driver to use...
	I0115 09:40:36.295671   22380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:40:36.309009   22380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:40:36.321655   22380 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:40:36.321708   22380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:40:36.334406   22380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:40:36.347552   22380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:40:36.457554   22380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:40:36.579188   22380 docker.go:233] disabling docker service ...
	I0115 09:40:36.579258   22380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:40:36.593657   22380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:40:36.604844   22380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:40:36.720330   22380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:40:36.834026   22380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:40:36.845716   22380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:40:36.862206   22380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0115 09:40:36.862269   22380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:40:36.871142   22380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:40:36.871201   22380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:40:36.880011   22380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:40:36.888748   22380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:40:36.897409   22380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:40:36.906480   22380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:40:36.914213   22380 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:40:36.914269   22380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 09:40:36.926446   22380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:40:36.934067   22380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:40:37.042271   22380 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:40:37.210306   22380 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:40:37.210388   22380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:40:37.215501   22380 start.go:543] Will wait 60s for crictl version
	I0115 09:40:37.215549   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:37.219190   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:40:37.262341   22380 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 09:40:37.262433   22380 ssh_runner.go:195] Run: crio --version
	I0115 09:40:37.307584   22380 ssh_runner.go:195] Run: crio --version
	I0115 09:40:37.354972   22380 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0115 09:40:37.356709   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetIP
	I0115 09:40:37.359423   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:37.359806   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:40:37.359836   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:40:37.360011   22380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 09:40:37.363863   22380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:40:37.376013   22380 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:40:37.376068   22380 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:40:37.411269   22380 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 09:40:37.411332   22380 ssh_runner.go:195] Run: which lz4
	I0115 09:40:37.414835   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 09:40:37.414918   22380 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 09:40:37.418774   22380 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 09:40:37.418800   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0115 09:40:39.392924   22380 crio.go:444] Took 1.978034 seconds to copy over tarball
	I0115 09:40:39.393003   22380 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 09:40:42.282470   22380 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.889436742s)
	I0115 09:40:42.282499   22380 crio.go:451] Took 2.889553 seconds to extract the tarball
	I0115 09:40:42.282520   22380 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 09:40:42.325757   22380 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:40:42.380141   22380 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 09:40:42.380164   22380 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 09:40:42.380213   22380 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:40:42.380251   22380 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:40:42.380265   22380 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:40:42.380296   22380 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:40:42.380317   22380 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0115 09:40:42.380347   22380 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0115 09:40:42.380267   22380 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:40:42.380234   22380 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:40:42.381417   22380 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:40:42.381456   22380 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0115 09:40:42.381424   22380 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:40:42.381551   22380 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:40:42.381622   22380 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:40:42.381696   22380 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:40:42.381783   22380 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0115 09:40:42.381834   22380 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:40:42.546687   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0115 09:40:42.585366   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:40:42.586809   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:40:42.590748   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:40:42.592694   22380 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0115 09:40:42.592727   22380 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:40:42.592760   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.603596   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:40:42.641970   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0115 09:40:42.645565   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0115 09:40:42.655890   22380 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:40:42.801258   22380 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0115 09:40:42.801297   22380 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:40:42.801348   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.801349   22380 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0115 09:40:42.801382   22380 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:40:42.801404   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0115 09:40:42.801423   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.801425   22380 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0115 09:40:42.801456   22380 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0115 09:40:42.801470   22380 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0115 09:40:42.801492   22380 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0115 09:40:42.801512   22380 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0115 09:40:42.801495   22380 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0115 09:40:42.801534   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.801536   22380 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:40:42.801562   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.801464   22380 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:40:42.801579   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.801625   22380 ssh_runner.go:195] Run: which crictl
	I0115 09:40:42.814439   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:40:42.816901   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:40:42.816932   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:40:42.816987   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:40:42.817026   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0115 09:40:42.816997   22380 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0115 09:40:42.910553   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0115 09:40:42.953443   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0115 09:40:42.961757   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0115 09:40:42.962520   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0115 09:40:42.962793   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0115 09:40:42.962845   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0115 09:40:42.962934   22380 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0115 09:40:42.962974   22380 cache_images.go:92] LoadImages completed in 582.800061ms
	W0115 09:40:42.963071   22380 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0115 09:40:42.963140   22380 ssh_runner.go:195] Run: crio config
	I0115 09:40:43.016041   22380 cni.go:84] Creating CNI manager for ""
	I0115 09:40:43.016060   22380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:40:43.016077   22380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:40:43.016102   22380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-799339 NodeName:ingress-addon-legacy-799339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 09:40:43.016221   22380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-799339"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
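
The block above is the kubeadm v1beta2 configuration that minikube renders in memory; the log below copies it to /var/tmp/minikube/kubeadm.yaml.new, promotes it to /var/tmp/minikube/kubeadm.yaml, and passes it to kubeadm init. As a minimal sketch only (the profile name and path are taken from this log, and it assumes the ingress-addon-legacy-799339 VM is still running), the staged file can be inspected directly on the node:

	# Illustrative sketch, not part of the test run: read the kubeadm config minikube staged on the node.
	# Profile name and file path are the ones shown in the surrounding log; adjust if yours differ.
	minikube -p ingress-addon-legacy-799339 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"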
	
	I0115 09:40:43.016286   22380 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-799339 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-799339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
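
The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in that minikube generates for this profile; the next few log lines copy it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A hedged way to confirm what systemd actually loaded on the node (assuming the VM is still up; systemctl cat is standard systemd, not a minikube-specific command):

	# Illustrative sketch: print kubelet.service plus its 10-kubeadm.conf drop-in as systemd sees them.
	minikube -p ingress-addon-legacy-799339 ssh "sudo systemctl cat kubelet"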
	I0115 09:40:43.016333   22380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0115 09:40:43.026125   22380 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:40:43.026192   22380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:40:43.035056   22380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0115 09:40:43.051093   22380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0115 09:40:43.066862   22380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0115 09:40:43.082365   22380 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0115 09:40:43.085858   22380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:40:43.097129   22380 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339 for IP: 192.168.39.118
	I0115 09:40:43.097153   22380 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.097296   22380 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 09:40:43.097356   22380 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 09:40:43.097413   22380 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key
	I0115 09:40:43.097430   22380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt with IP's: []
	I0115 09:40:43.215883   22380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt ...
	I0115 09:40:43.287626   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: {Name:mk8aece9a33535b655f1f7d005a8e429da4e2a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.287808   22380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key ...
	I0115 09:40:43.287862   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key: {Name:mk030ab3a6a208d2769b9cab10d9aa718c0646c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.287968   22380 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key.ee260ba9
	I0115 09:40:43.287987   22380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt.ee260ba9 with IP's: [192.168.39.118 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:40:43.439562   22380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt.ee260ba9 ...
	I0115 09:40:43.439589   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt.ee260ba9: {Name:mkf8eac05656e4bb85c824f41ea4d4021359b97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.439725   22380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key.ee260ba9 ...
	I0115 09:40:43.439742   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key.ee260ba9: {Name:mk592ae2e11ad84f90be4131bae5ed77825a2f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.439803   22380 certs.go:337] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt.ee260ba9 -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt
	I0115 09:40:43.439863   22380 certs.go:341] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key.ee260ba9 -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key
	I0115 09:40:43.439915   22380 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.key
	I0115 09:40:43.439928   22380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.crt with IP's: []
	I0115 09:40:43.709536   22380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.crt ...
	I0115 09:40:43.709564   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.crt: {Name:mk67354f3828415cb999c0654e6b812dbc9e16e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.709704   22380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.key ...
	I0115 09:40:43.709723   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.key: {Name:mk8972ce5fb4e0cd21ee4997b8385336366a663a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:40:43.709798   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 09:40:43.709816   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 09:40:43.709825   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 09:40:43.709835   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 09:40:43.709847   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:40:43.709860   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:40:43.709874   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:40:43.709895   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:40:43.709942   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 09:40:43.709975   22380 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 09:40:43.709984   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 09:40:43.710007   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 09:40:43.710035   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:40:43.710053   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 09:40:43.710105   22380 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:40:43.710130   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 09:40:43.710143   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:40:43.710160   22380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 09:40:43.710779   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:40:43.736723   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 09:40:43.759280   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:40:43.780645   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 09:40:43.802271   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:40:43.824205   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:40:43.845795   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:40:43.867623   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:40:43.888938   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 09:40:43.910159   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:40:43.931095   22380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 09:40:43.952431   22380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:40:43.967705   22380 ssh_runner.go:195] Run: openssl version
	I0115 09:40:43.973118   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 09:40:43.982934   22380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 09:40:43.987526   22380 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 09:40:43.987583   22380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 09:40:43.992853   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 09:40:44.002902   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:40:44.012602   22380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:40:44.017040   22380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:40:44.017082   22380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:40:44.022340   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:40:44.032420   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 09:40:44.042166   22380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 09:40:44.046354   22380 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 09:40:44.046402   22380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 09:40:44.051649   22380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 09:40:44.061334   22380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:40:44.065307   22380 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:40:44.065358   22380 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-799339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-799339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:40:44.065454   22380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:40:44.065494   22380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:40:44.102080   22380 cri.go:89] found id: ""
	I0115 09:40:44.102150   22380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:40:44.111542   22380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:40:44.120494   22380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:40:44.129295   22380 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:40:44.129334   22380 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 09:40:44.179834   22380 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 09:40:44.179913   22380 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:40:44.312960   22380 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:40:44.313113   22380 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:40:44.313248   22380 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:40:44.530653   22380 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:40:44.530795   22380 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:40:44.530911   22380 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:40:44.657934   22380 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:40:44.715230   22380 out.go:204]   - Generating certificates and keys ...
	I0115 09:40:44.715358   22380 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:40:44.715441   22380 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:40:44.840022   22380 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:40:44.950157   22380 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:40:45.106720   22380 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:40:45.188375   22380 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:40:45.319964   22380 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:40:45.320164   22380 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-799339 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I0115 09:40:45.592125   22380 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:40:45.592347   22380 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-799339 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I0115 09:40:45.758171   22380 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:40:45.999383   22380 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:40:46.057814   22380 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:40:46.057917   22380 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:40:46.170400   22380 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:40:46.249328   22380 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:40:46.327685   22380 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:40:46.475529   22380 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:40:46.476198   22380 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:40:46.478234   22380 out.go:204]   - Booting up control plane ...
	I0115 09:40:46.478337   22380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:40:46.482846   22380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:40:46.485664   22380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:40:46.485773   22380 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:40:46.489693   22380 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:40:55.990590   22380 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503187 seconds
	I0115 09:40:55.990703   22380 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:40:56.010179   22380 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:40:56.529524   22380 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:40:56.529734   22380 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-799339 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 09:40:57.036996   22380 kubeadm.go:322] [bootstrap-token] Using token: dz9c92.nl9h280xlkif8161
	I0115 09:40:57.038307   22380 out.go:204]   - Configuring RBAC rules ...
	I0115 09:40:57.038456   22380 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:40:57.049911   22380 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:40:57.060334   22380 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:40:57.063699   22380 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:40:57.071924   22380 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:40:57.075697   22380 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:40:57.086834   22380 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:40:57.414463   22380 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:40:57.512991   22380 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:40:57.514146   22380 kubeadm.go:322] 
	I0115 09:40:57.514210   22380 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:40:57.514222   22380 kubeadm.go:322] 
	I0115 09:40:57.514293   22380 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:40:57.514300   22380 kubeadm.go:322] 
	I0115 09:40:57.514319   22380 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:40:57.514366   22380 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:40:57.514406   22380 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:40:57.514418   22380 kubeadm.go:322] 
	I0115 09:40:57.514472   22380 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:40:57.514554   22380 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:40:57.514664   22380 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:40:57.514685   22380 kubeadm.go:322] 
	I0115 09:40:57.514811   22380 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:40:57.514915   22380 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:40:57.514930   22380 kubeadm.go:322] 
	I0115 09:40:57.515101   22380 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dz9c92.nl9h280xlkif8161 \
	I0115 09:40:57.515259   22380 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 09:40:57.515299   22380 kubeadm.go:322]     --control-plane 
	I0115 09:40:57.515317   22380 kubeadm.go:322] 
	I0115 09:40:57.515390   22380 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:40:57.515396   22380 kubeadm.go:322] 
	I0115 09:40:57.515465   22380 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dz9c92.nl9h280xlkif8161 \
	I0115 09:40:57.515553   22380 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 09:40:57.516092   22380 kubeadm.go:322] W0115 09:40:44.163906     967 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 09:40:57.516208   22380 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:40:57.516313   22380 kubeadm.go:322] W0115 09:40:46.468656     967 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 09:40:57.516439   22380 kubeadm.go:322] W0115 09:40:46.470066     967 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
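
At this point kubeadm init has finished and the log moves on to CNI and RBAC setup. A quick cross-check that the control plane really came up, offered as a sketch only (it reuses the bundled kubectl binary and the admin kubeconfig paths shown elsewhere in this log, and assumes the VM is still reachable):

	# Illustrative sketch: list control-plane pods with the kubectl binary minikube staged for v1.18.20.
	minikube -p ingress-addon-legacy-799339 ssh \
	  "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system"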
	I0115 09:40:57.516476   22380 cni.go:84] Creating CNI manager for ""
	I0115 09:40:57.516489   22380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:40:57.518173   22380 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 09:40:57.519514   22380 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 09:40:57.543789   22380 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 09:40:57.573574   22380 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:40:57.573663   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=ingress-addon-legacy-799339 minikube.k8s.io/updated_at=2024_01_15T09_40_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:57.573667   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:57.633968   22380 ops.go:34] apiserver oom_adj: -16
	I0115 09:40:57.764247   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:58.264658   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:58.764517   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:59.264456   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:40:59.764406   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:00.264716   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:00.764864   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:01.264648   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:01.765230   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:02.264556   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:02.765108   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:03.264313   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:03.764284   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:04.264998   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:04.764956   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:05.264597   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:05.764412   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:06.264348   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:06.765373   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:07.264864   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:07.764948   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:08.265266   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:08.765295   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:09.265059   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:09.764535   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:10.264618   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:10.764802   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:11.265007   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:11.764542   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:12.264456   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:12.764705   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:13.264888   22380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:41:13.356017   22380 kubeadm.go:1088] duration metric: took 15.782430592s to wait for elevateKubeSystemPrivileges.
	I0115 09:41:13.356053   22380 kubeadm.go:406] StartCluster complete in 29.290697101s
	I0115 09:41:13.356075   22380 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:41:13.356157   22380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:41:13.357122   22380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:41:13.357343   22380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:41:13.357425   22380 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 09:41:13.357507   22380 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-799339"
	I0115 09:41:13.357537   22380 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-799339"
	I0115 09:41:13.357547   22380 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-799339"
	I0115 09:41:13.357577   22380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-799339"
	I0115 09:41:13.357582   22380 config.go:182] Loaded profile config "ingress-addon-legacy-799339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 09:41:13.357602   22380 host.go:66] Checking if "ingress-addon-legacy-799339" exists ...
	I0115 09:41:13.357998   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:41:13.358036   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:41:13.358069   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:41:13.358110   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:41:13.358104   22380 kapi.go:59] client config for ingress-addon-legacy-799339: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:41:13.358852   22380 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 09:41:13.373741   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0115 09:41:13.374207   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:41:13.374753   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:41:13.374778   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:41:13.375149   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:41:13.375708   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:41:13.375755   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:41:13.376550   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I0115 09:41:13.376980   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:41:13.377448   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:41:13.377473   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:41:13.377768   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:41:13.378009   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetState
	I0115 09:41:13.380611   22380 kapi.go:59] client config for ingress-addon-legacy-799339: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:41:13.380929   22380 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-799339"
	I0115 09:41:13.380966   22380 host.go:66] Checking if "ingress-addon-legacy-799339" exists ...
	I0115 09:41:13.381385   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:41:13.381414   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:41:13.390232   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0115 09:41:13.390705   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:41:13.391215   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:41:13.391241   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:41:13.391530   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:41:13.391738   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetState
	I0115 09:41:13.393405   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:41:13.395342   22380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:41:13.396049   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0115 09:41:13.397093   22380 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:41:13.397107   22380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:41:13.397121   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:41:13.397504   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:41:13.397885   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:41:13.397898   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:41:13.398263   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:41:13.398842   22380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:41:13.398872   22380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:41:13.400050   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:41:13.400437   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:41:13.400516   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:41:13.400605   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:41:13.400795   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:41:13.400934   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:41:13.401087   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:41:13.412846   22380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46863
	I0115 09:41:13.413189   22380 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:41:13.413589   22380 main.go:141] libmachine: Using API Version  1
	I0115 09:41:13.413605   22380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:41:13.413888   22380 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:41:13.414028   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetState
	I0115 09:41:13.415524   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .DriverName
	I0115 09:41:13.415731   22380 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:41:13.415740   22380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:41:13.415751   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHHostname
	I0115 09:41:13.418530   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:41:13.418897   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:2d:66", ip: ""} in network mk-ingress-addon-legacy-799339: {Iface:virbr1 ExpiryTime:2024-01-15 10:40:23 +0000 UTC Type:0 Mac:52:54:00:94:2d:66 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ingress-addon-legacy-799339 Clientid:01:52:54:00:94:2d:66}
	I0115 09:41:13.418914   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | domain ingress-addon-legacy-799339 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:2d:66 in network mk-ingress-addon-legacy-799339
	I0115 09:41:13.419062   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHPort
	I0115 09:41:13.419225   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHKeyPath
	I0115 09:41:13.419376   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .GetSSHUsername
	I0115 09:41:13.419475   22380 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/ingress-addon-legacy-799339/id_rsa Username:docker}
	I0115 09:41:13.522793   22380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:41:13.551219   22380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:41:13.584570   22380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:41:13.912109   22380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-799339" context rescaled to 1 replicas
	I0115 09:41:13.912146   22380 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:41:13.913695   22380 out.go:177] * Verifying Kubernetes components...
	I0115 09:41:13.915010   22380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:41:14.006781   22380 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0115 09:41:15.139011   22380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.587755908s)
	I0115 09:41:15.139051   22380 main.go:141] libmachine: Making call to close driver server
	I0115 09:41:15.139065   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Close
	I0115 09:41:15.139118   22380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.554511231s)
	I0115 09:41:15.139154   22380 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.224125656s)
	I0115 09:41:15.139173   22380 main.go:141] libmachine: Making call to close driver server
	I0115 09:41:15.139194   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Close
	I0115 09:41:15.139337   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Closing plugin on server side
	I0115 09:41:15.139469   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Closing plugin on server side
	I0115 09:41:15.139489   22380 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:41:15.139505   22380 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:41:15.139515   22380 main.go:141] libmachine: Making call to close driver server
	I0115 09:41:15.139525   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Close
	I0115 09:41:15.139574   22380 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:41:15.139586   22380 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:41:15.139599   22380 main.go:141] libmachine: Making call to close driver server
	I0115 09:41:15.139610   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Close
	I0115 09:41:15.139807   22380 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:41:15.139818   22380 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:41:15.139881   22380 kapi.go:59] client config for ingress-addon-legacy-799339: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:41:15.140199   22380 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-799339" to be "Ready" ...
	I0115 09:41:15.140378   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) DBG | Closing plugin on server side
	I0115 09:41:15.141523   22380 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0115 09:41:15.141545   22380 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:41:15.160883   22380 node_ready.go:49] node "ingress-addon-legacy-799339" has status "Ready":"True"
	I0115 09:41:15.160904   22380 node_ready.go:38] duration metric: took 20.680265ms waiting for node "ingress-addon-legacy-799339" to be "Ready" ...
	I0115 09:41:15.160912   22380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:41:15.193714   22380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:15.215787   22380 main.go:141] libmachine: Making call to close driver server
	I0115 09:41:15.215816   22380 main.go:141] libmachine: (ingress-addon-legacy-799339) Calling .Close
	I0115 09:41:15.216141   22380 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:41:15.216161   22380 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:41:15.217898   22380 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 09:41:15.219580   22380 addons.go:505] enable addons completed in 1.862157745s: enabled=[storage-provisioner default-storageclass]
	I0115 09:41:17.199762   22380 pod_ready.go:102] pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace has status "Ready":"False"
	I0115 09:41:19.200025   22380 pod_ready.go:102] pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace has status "Ready":"False"
	I0115 09:41:21.200680   22380 pod_ready.go:102] pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace has status "Ready":"False"
	I0115 09:41:23.700275   22380 pod_ready.go:102] pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace has status "Ready":"False"
	I0115 09:41:24.701217   22380 pod_ready.go:92] pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:24.701240   22380 pod_ready.go:81] duration metric: took 9.507503859s waiting for pod "coredns-66bff467f8-k8mgt" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.701252   22380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.706086   22380 pod_ready.go:92] pod "etcd-ingress-addon-legacy-799339" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:24.706102   22380 pod_ready.go:81] duration metric: took 4.842539ms waiting for pod "etcd-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.706114   22380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.710582   22380 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-799339" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:24.710602   22380 pod_ready.go:81] duration metric: took 4.48186ms waiting for pod "kube-apiserver-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.710610   22380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.714915   22380 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-799339" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:24.714934   22380 pod_ready.go:81] duration metric: took 4.317365ms waiting for pod "kube-controller-manager-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.714945   22380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b7jfx" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.719492   22380 pod_ready.go:92] pod "kube-proxy-b7jfx" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:24.719511   22380 pod_ready.go:81] duration metric: took 4.559256ms waiting for pod "kube-proxy-b7jfx" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.719521   22380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:24.894817   22380 request.go:629] Waited for 175.250399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-799339
	I0115 09:41:25.095639   22380 request.go:629] Waited for 197.383688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/nodes/ingress-addon-legacy-799339
	I0115 09:41:25.099306   22380 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-799339" in "kube-system" namespace has status "Ready":"True"
	I0115 09:41:25.099324   22380 pod_ready.go:81] duration metric: took 379.795788ms waiting for pod "kube-scheduler-ingress-addon-legacy-799339" in "kube-system" namespace to be "Ready" ...
	I0115 09:41:25.099335   22380 pod_ready.go:38] duration metric: took 9.938414022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
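The node and pod readiness gates logged above can be reproduced by hand with kubectl; a minimal sketch, assuming the kubectl context carries the same name as the minikube profile and reusing the label selectors shown in the log:

	kubectl --context ingress-addon-legacy-799339 wait node/ingress-addon-legacy-799339 --for=condition=Ready --timeout=6m
	kubectl --context ingress-addon-legacy-799339 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m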
	I0115 09:41:25.099355   22380 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:41:25.099428   22380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:41:25.111601   22380 api_server.go:72] duration metric: took 11.199427141s to wait for apiserver process to appear ...
	I0115 09:41:25.111622   22380 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:41:25.111636   22380 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0115 09:41:25.117629   22380 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0115 09:41:25.118481   22380 api_server.go:141] control plane version: v1.18.20
	I0115 09:41:25.118503   22380 api_server.go:131] duration metric: took 6.8769ms to wait for apiserver health ...
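The healthz probe above can also be repeated manually with the client certificates from the rest.Config logged earlier; a sketch, assuming the same certificate paths and apiserver endpoint (a healthy apiserver answers HTTP 200 with the body "ok", as in the lines above):

	curl --cacert /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt \
	  --cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt \
	  --key /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.key \
	  https://192.168.39.118:8443/healthz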
	I0115 09:41:25.118510   22380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:41:25.294815   22380 request.go:629] Waited for 176.25521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods
	I0115 09:41:25.300846   22380 system_pods.go:59] 7 kube-system pods found
	I0115 09:41:25.300872   22380 system_pods.go:61] "coredns-66bff467f8-k8mgt" [b846d6ed-67bb-45b5-b7d1-d78a3c706339] Running
	I0115 09:41:25.300880   22380 system_pods.go:61] "etcd-ingress-addon-legacy-799339" [010cba9b-8a52-4c42-9b75-7f38e507933b] Running
	I0115 09:41:25.300884   22380 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-799339" [12631e71-800e-4b18-934f-5404233b41b9] Running
	I0115 09:41:25.300889   22380 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-799339" [003c30d2-f7af-44a1-903d-463d0ed956d8] Running
	I0115 09:41:25.300892   22380 system_pods.go:61] "kube-proxy-b7jfx" [61d0b85c-4001-43a8-b76c-252a40526328] Running
	I0115 09:41:25.300896   22380 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-799339" [aeb79128-8361-432a-9910-1f56329ce527] Running
	I0115 09:41:25.300900   22380 system_pods.go:61] "storage-provisioner" [ad671afd-6434-40f6-a949-e554850a4708] Running
	I0115 09:41:25.300905   22380 system_pods.go:74] duration metric: took 182.390792ms to wait for pod list to return data ...
	I0115 09:41:25.300915   22380 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:41:25.495311   22380 request.go:629] Waited for 194.334796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:41:25.497949   22380 default_sa.go:45] found service account: "default"
	I0115 09:41:25.497968   22380 default_sa.go:55] duration metric: took 197.047757ms for default service account to be created ...
	I0115 09:41:25.497975   22380 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:41:25.695220   22380 request.go:629] Waited for 197.18641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/namespaces/kube-system/pods
	I0115 09:41:25.700845   22380 system_pods.go:86] 7 kube-system pods found
	I0115 09:41:25.700869   22380 system_pods.go:89] "coredns-66bff467f8-k8mgt" [b846d6ed-67bb-45b5-b7d1-d78a3c706339] Running
	I0115 09:41:25.700874   22380 system_pods.go:89] "etcd-ingress-addon-legacy-799339" [010cba9b-8a52-4c42-9b75-7f38e507933b] Running
	I0115 09:41:25.700879   22380 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-799339" [12631e71-800e-4b18-934f-5404233b41b9] Running
	I0115 09:41:25.700883   22380 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-799339" [003c30d2-f7af-44a1-903d-463d0ed956d8] Running
	I0115 09:41:25.700892   22380 system_pods.go:89] "kube-proxy-b7jfx" [61d0b85c-4001-43a8-b76c-252a40526328] Running
	I0115 09:41:25.700898   22380 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-799339" [aeb79128-8361-432a-9910-1f56329ce527] Running
	I0115 09:41:25.700903   22380 system_pods.go:89] "storage-provisioner" [ad671afd-6434-40f6-a949-e554850a4708] Running
	I0115 09:41:25.700917   22380 system_pods.go:126] duration metric: took 202.935918ms to wait for k8s-apps to be running ...
	I0115 09:41:25.700929   22380 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:41:25.700977   22380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:41:25.714201   22380 system_svc.go:56] duration metric: took 13.269232ms WaitForService to wait for kubelet.
	I0115 09:41:25.714214   22380 kubeadm.go:581] duration metric: took 11.802046423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:41:25.714230   22380 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:41:25.895693   22380 request.go:629] Waited for 181.403766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.118:8443/api/v1/nodes
	I0115 09:41:25.899431   22380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 09:41:25.899461   22380 node_conditions.go:123] node cpu capacity is 2
	I0115 09:41:25.899471   22380 node_conditions.go:105] duration metric: took 185.23677ms to run NodePressure ...
	I0115 09:41:25.899481   22380 start.go:228] waiting for startup goroutines ...
	I0115 09:41:25.899487   22380 start.go:233] waiting for cluster config update ...
	I0115 09:41:25.899496   22380 start.go:242] writing updated cluster config ...
	I0115 09:41:25.899697   22380 ssh_runner.go:195] Run: rm -f paused
	I0115 09:41:25.945458   22380 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0115 09:41:25.947524   22380 out.go:177] 
	W0115 09:41:25.949042   22380 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0115 09:41:25.950379   22380 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0115 09:41:25.951749   22380 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-799339" cluster and "default" namespace by default
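To sidestep the minor-version skew flagged above, the cluster-matched kubectl bundled with minikube can be used instead of the host binary; a sketch, assuming the same profile and binary path used elsewhere in this report:

	out/minikube-linux-amd64 -p ingress-addon-legacy-799339 kubectl -- get pods -A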
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 09:40:20 UTC, ends at Mon 2024-01-15 09:44:36 UTC. --
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.267122711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d6c92cd5-4583-44d6-9dc8-e63ab2f3cc4b name=/runtime.v1.RuntimeService/Version
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.268023241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8c33e0ad-8e05-483e-b33d-55ac64cea85a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.268595406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311876268580135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=8c33e0ad-8e05-483e-b33d-55ac64cea85a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.269048031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c3e0547-c11c-40cf-835f-954bf2910d6a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.269136498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c3e0547-c11c-40cf-835f-954bf2910d6a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.269376620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b64dbd7eb1739503b461d8126a93441dad4bc315b22919ed1d0b2222ccba4f1,PodSandboxId:4bfe5922881b3e5cb191854354a3e1e216f1fe03f6c27101621338dd26aa03d2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311857464975251,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-c4zt2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4f82180-4e96-49cd-8d30-7ef426095af7,},Annotations:map[string]string{io.kubernetes.container.hash: cb255934,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572f5a628c2f29969bd98dba6dcb6da5c9881e8f2d661d7cf8914d4a17512a3,PodSandboxId:68011cdd4d3a7f1c7754e8b0645fa36ec8c0bd6fa2ffb6b525913899371c2abe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705311716624640202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 370fe753-c6ef-4033-8f4b-752d94c9c6b6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9fea238,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a2dec17ece1394c9c0ffa2b0f1039218116d02ce8d477048916c8fc9487d8b,PodSandboxId:b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705311698101766391,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-jr6vg,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1a2f99ca-e2df-4797-a3be-8b098db5e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: 840870f5,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4614b97830ad4b123387bdd9c2d4b2f47a6010dfb1b9bea1861c11795a107076,PodSandboxId:57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311690192570494,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4dbl8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fa01cb6-05ec-4daf-ad17-52cd90cb5841,},Annotations:map[string]string{io.kubernetes.container.hash: f6fc69fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c35d5ae26919f4ecfa649eaca69f09fe2cb3953a02b20b2dbe5362fb6e5b029,PodSandboxId:e259c90558d2177f0d08fc7975a0e8d7226c0afca6767f55e9c48be0a0911d3b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311689746250860,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6rxqv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358207aa-f95b-4cd9-980f-200ec27901e2,},Annotations:map[string]string{io.kubernetes.container.hash: 67d2c3db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d138daf347f9fa4bf2752bda61d1b26a4bf4012b19afa6584c787f12f06374fb,PodSandboxId:1995323d0cd74b991666594a3aba3b54ee6c2d8d0b6c669c1c42f00a28af16be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705311676665754965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad671afd-6434-40f6-a949-e554850a4708,},Annotations:map[string]string{io.kubernetes.container.hash: 39c87d0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13165918f1928944c4d4980f89f49fbb121b98088e9ab8e537568de7dba2562,PodSandboxId:738a8c1e5054a5bd02993a6e3fa0310a7d807557eb489d3b5e97afe7ccea447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705311675131764764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-k8mgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b846d6ed-67bb-45b5-b7d1-d78a3c706339,},Annotations:map[string]string{io.kubernetes.container.hash: 4db88546,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582bf854467818145dfd4b83fc25
cbdb422cedbbe2744909913289f1cf2be0c8,PodSandboxId:953fa70daecb40f5b0ebfee73255ee924309b841b9a49d5ffd7381cd80ca51ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705311674742758773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7jfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0b85c-4001-43a8-b76c-252a40526328,},Annotations:map[string]string{io.kubernetes.container.hash: 46c4275a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f033dbd7cde74733b1bb7a7cf0a7aabfa1e1fbb4d2240a849a7971930bbc7a4d,Pod
SandboxId:c16751fcc724baec9922e3fe4570db96cb9a009d66bc0c7920c2a2788e3bfdbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705311649632327296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71eec3b104d14ae53b22c3efb98fd284,},Annotations:map[string]string{io.kubernetes.container.hash: f307eacb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f956d333fd9dd8d53cfad4a8fc4301d60a2ba2505f9cb645caef4f64dd6cde51,PodSandboxId:901ba14ea9e67d3f7f1a6fbd5988e7254800
65bbde2b1324bdd6974fe16f65fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705311648499014139,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59ed4516dc1136b9fac56f7fab1409bb13f688dd51718287572eff5a03dda9a,PodSandboxId:ea0f2bed38e85e6ef5b3f083eecef037ddc8d24dc1
9c008ebb46464202e6da84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705311647938479236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1d128bb559f34d57b181d107dc3f0a917079eebe10b42c654fa22c2bce6a5c,PodSandboxId:621d493e4d720aade2a5c2ee49d3e89a3d56bf4db295132d
7a0b9e342982866f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705311647830013604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c3e0547-c11c-40cf-835f-954bf2910d6a name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.309650644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=889e19e0-cdbe-480a-bf84-84c9e0f1ab1c name=/runtime.v1.RuntimeService/Version
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.309721458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=889e19e0-cdbe-480a-bf84-84c9e0f1ab1c name=/runtime.v1.RuntimeService/Version
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.313058434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=46cc39ce-34cd-4012-a18a-718a26526637 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.313663267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311876313648011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=46cc39ce-34cd-4012-a18a-718a26526637 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.314223937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3caef500-e3e6-4bd7-9dfc-73e78d55466e name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.314327296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3caef500-e3e6-4bd7-9dfc-73e78d55466e name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.314671845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b64dbd7eb1739503b461d8126a93441dad4bc315b22919ed1d0b2222ccba4f1,PodSandboxId:4bfe5922881b3e5cb191854354a3e1e216f1fe03f6c27101621338dd26aa03d2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311857464975251,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-c4zt2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4f82180-4e96-49cd-8d30-7ef426095af7,},Annotations:map[string]string{io.kubernetes.container.hash: cb255934,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572f5a628c2f29969bd98dba6dcb6da5c9881e8f2d661d7cf8914d4a17512a3,PodSandboxId:68011cdd4d3a7f1c7754e8b0645fa36ec8c0bd6fa2ffb6b525913899371c2abe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705311716624640202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 370fe753-c6ef-4033-8f4b-752d94c9c6b6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9fea238,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a2dec17ece1394c9c0ffa2b0f1039218116d02ce8d477048916c8fc9487d8b,PodSandboxId:b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705311698101766391,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-jr6vg,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1a2f99ca-e2df-4797-a3be-8b098db5e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: 840870f5,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4614b97830ad4b123387bdd9c2d4b2f47a6010dfb1b9bea1861c11795a107076,PodSandboxId:57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311690192570494,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4dbl8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fa01cb6-05ec-4daf-ad17-52cd90cb5841,},Annotations:map[string]string{io.kubernetes.container.hash: f6fc69fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c35d5ae26919f4ecfa649eaca69f09fe2cb3953a02b20b2dbe5362fb6e5b029,PodSandboxId:e259c90558d2177f0d08fc7975a0e8d7226c0afca6767f55e9c48be0a0911d3b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311689746250860,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6rxqv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358207aa-f95b-4cd9-980f-200ec27901e2,},Annotations:map[string]string{io.kubernetes.container.hash: 67d2c3db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d138daf347f9fa4bf2752bda61d1b26a4bf4012b19afa6584c787f12f06374fb,PodSandboxId:1995323d0cd74b991666594a3aba3b54ee6c2d8d0b6c669c1c42f00a28af16be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705311676665754965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad671afd-6434-40f6-a949-e554850a4708,},Annotations:map[string]string{io.kubernetes.container.hash: 39c87d0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13165918f1928944c4d4980f89f49fbb121b98088e9ab8e537568de7dba2562,PodSandboxId:738a8c1e5054a5bd02993a6e3fa0310a7d807557eb489d3b5e97afe7ccea447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705311675131764764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-k8mgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b846d6ed-67bb-45b5-b7d1-d78a3c706339,},Annotations:map[string]string{io.kubernetes.container.hash: 4db88546,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582bf854467818145dfd4b83fc25
cbdb422cedbbe2744909913289f1cf2be0c8,PodSandboxId:953fa70daecb40f5b0ebfee73255ee924309b841b9a49d5ffd7381cd80ca51ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705311674742758773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7jfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0b85c-4001-43a8-b76c-252a40526328,},Annotations:map[string]string{io.kubernetes.container.hash: 46c4275a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f033dbd7cde74733b1bb7a7cf0a7aabfa1e1fbb4d2240a849a7971930bbc7a4d,Pod
SandboxId:c16751fcc724baec9922e3fe4570db96cb9a009d66bc0c7920c2a2788e3bfdbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705311649632327296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71eec3b104d14ae53b22c3efb98fd284,},Annotations:map[string]string{io.kubernetes.container.hash: f307eacb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f956d333fd9dd8d53cfad4a8fc4301d60a2ba2505f9cb645caef4f64dd6cde51,PodSandboxId:901ba14ea9e67d3f7f1a6fbd5988e7254800
65bbde2b1324bdd6974fe16f65fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705311648499014139,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59ed4516dc1136b9fac56f7fab1409bb13f688dd51718287572eff5a03dda9a,PodSandboxId:ea0f2bed38e85e6ef5b3f083eecef037ddc8d24dc1
9c008ebb46464202e6da84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705311647938479236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1d128bb559f34d57b181d107dc3f0a917079eebe10b42c654fa22c2bce6a5c,PodSandboxId:621d493e4d720aade2a5c2ee49d3e89a3d56bf4db295132d
7a0b9e342982866f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705311647830013604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3caef500-e3e6-4bd7-9dfc-73e78d55466e name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.321017074Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=bb2de5b7-8be4-4ec8-abaa-4c8e7955f82a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.321779220Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4bfe5922881b3e5cb191854354a3e1e216f1fe03f6c27101621338dd26aa03d2,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-c4zt2,Uid:b4f82180-4e96-49cd-8d30-7ef426095af7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311854979403519,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-c4zt2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4f82180-4e96-49cd-8d30-7ef426095af7,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:44:14.628016241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68011cdd4d3a7f1c7754e8b0645fa36ec8c0bd6fa2ffb6b525913899371c2abe,Metadata:&PodSandboxMetadata{Name:nginx,Uid:370fe753-c6ef-4033-8f4b-752d94c9c6b6,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311713163672380,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 370fe753-c6ef-4033-8f4b-752d94c9c6b6,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:52.825024751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cec7d9b2572ba13464745d00f13af7efeeafe1db373d1c624d3630fb1d1d11cd,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705311699873132919,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-01-15T09:41:39.531724271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-jr6vg,Uid:1a2f99ca-e2df-4797-a3be
-8b098db5e3ba,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705311690678667978,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-jr6vg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1a2f99ca-e2df-4797-a3be-8b098db5e3ba,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:26.743085551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-4dbl8,Uid:1fa01cb6-05ec-4daf-ad17-52cd90cb5841,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705311687234088664,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/ins
tance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: f98b9af0-d3f4-4a8b-9726-6acf37d53680,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-4dbl8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fa01cb6-05ec-4daf-ad17-52cd90cb5841,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:26.892823658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e259c90558d2177f0d08fc7975a0e8d7226c0afca6767f55e9c48be0a0911d3b,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-6rxqv,Uid:358207aa-f95b-4cd9-980f-200ec27901e2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705311687153356867,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 93f2d499-36d0-44f9-860e-e38c2ba3251f,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: ingress-nginx-admission-create-6rxqv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358207aa-f95b-4cd9-980f-200ec27901e2,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:26.818796651Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1995323d0cd74b991666594a3aba3b54ee6c2d8d0b6c669c1c42f00a28af16be,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ad671afd-6434-40f6-a949-e554850a4708,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311676380019013,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad671afd-6434-40f6-a949-e554850a4708,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-15T09:41:15.143755434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:738a8c1e5054a5bd02993a6e3fa0310a7d807557eb489d3b5e97afe7ccea447d,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-k8mgt,Uid:b846d6ed-67bb-45b5-b7d1-d78a3c706339,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311674958159970,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-66bff467f8-k8mgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b846d6ed-67bb-45b5-b7d1-d78a3c706339,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:14.619075221Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:953fa70daecb40f5b0ebfee73255ee924309b841b9a49d5ffd7381cd80ca51ed,Metadata:&PodSandboxMetadata{Name:kube-proxy-b7jfx,Uid:61d0b85c-4001-43a8-b76c-252a40526328,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311674426801546,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b7jfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0b85c-4001-43a8-b76c-252a40526328,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:41:12.590315377Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:c16751fcc724baec9922e3fe4570db96cb9a009d66bc0c7920c2a2788e3bfdbf,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-799339,Uid:71eec3b104d14ae53b22c3efb98fd284,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311647507445157,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71eec3b104d14ae53b22c3efb98fd284,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.118:2379,kubernetes.io/config.hash: 71eec3b104d14ae53b22c3efb98fd284,kubernetes.io/config.seen: 2024-01-15T09:40:46.485260642Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea0f2bed38e85e6ef5b3f083eecef037ddc8d24dc19c008ebb46464202e6da84,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-799339,Uid:557108d0824c209762d74d1fb6913635,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311647502770054,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.118:8443,kubernetes.io/config.hash: 557108d0824c209762d74d1fb6913635,kubernetes.io/config.seen: 2024-01-15T09:40:46.480310488Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:901ba14ea9e67d3f7f1a6fbd5988e725480065bbde2b1324bdd6974fe16f65fc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-799339,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311647460231336,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-
ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2024-01-15T09:40:46.483860119Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:621d493e4d720aade2a5c2ee49d3e89a3d56bf4db295132d7a0b9e342982866f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-799339,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705311647423277813,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernete
s.io/config.seen: 2024-01-15T09:40:46.482316593Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=bb2de5b7-8be4-4ec8-abaa-4c8e7955f82a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
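The Version, ImageFsInfo, ListContainers and ListPodSandbox requests in this journal are ordinary CRI RPCs, so the same data can be queried interactively from inside the node with crictl; a sketch, assuming crictl is available in the guest and is invoked over ssh:

	out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ssh -- sudo crictl pods
	out/minikube-linux-amd64 -p ingress-addon-legacy-799339 ssh -- sudo crictl imagefsinfo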
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.322582514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c4fe6c6-00cd-4eff-9835-93e634bd445c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.322689339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c4fe6c6-00cd-4eff-9835-93e634bd445c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.322940924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b64dbd7eb1739503b461d8126a93441dad4bc315b22919ed1d0b2222ccba4f1,PodSandboxId:4bfe5922881b3e5cb191854354a3e1e216f1fe03f6c27101621338dd26aa03d2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311857464975251,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-c4zt2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4f82180-4e96-49cd-8d30-7ef426095af7,},Annotations:map[string]string{io.kubernetes.container.hash: cb255934,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572f5a628c2f29969bd98dba6dcb6da5c9881e8f2d661d7cf8914d4a17512a3,PodSandboxId:68011cdd4d3a7f1c7754e8b0645fa36ec8c0bd6fa2ffb6b525913899371c2abe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705311716624640202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 370fe753-c6ef-4033-8f4b-752d94c9c6b6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9fea238,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a2dec17ece1394c9c0ffa2b0f1039218116d02ce8d477048916c8fc9487d8b,PodSandboxId:b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705311698101766391,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-jr6vg,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1a2f99ca-e2df-4797-a3be-8b098db5e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: 840870f5,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4614b97830ad4b123387bdd9c2d4b2f47a6010dfb1b9bea1861c11795a107076,PodSandboxId:57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311690192570494,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4dbl8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fa01cb6-05ec-4daf-ad17-52cd90cb5841,},Annotations:map[string]string{io.kubernetes.container.hash: f6fc69fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c35d5ae26919f4ecfa649eaca69f09fe2cb3953a02b20b2dbe5362fb6e5b029,PodSandboxId:e259c90558d2177f0d08fc7975a0e8d7226c0afca6767f55e9c48be0a0911d3b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311689746250860,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6rxqv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358207aa-f95b-4cd9-980f-200ec27901e2,},Annotations:map[string]string{io.kubernetes.container.hash: 67d2c3db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d138daf347f9fa4bf2752bda61d1b26a4bf4012b19afa6584c787f12f06374fb,PodSandboxId:1995323d0cd74b991666594a3aba3b54ee6c2d8d0b6c669c1c42f00a28af16be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705311676665754965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad671afd-6434-40f6-a949-e554850a4708,},Annotations:map[string]string{io.kubernetes.container.hash: 39c87d0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13165918f1928944c4d4980f89f49fbb121b98088e9ab8e537568de7dba2562,PodSandboxId:738a8c1e5054a5bd02993a6e3fa0310a7d807557eb489d3b5e97afe7ccea447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705311675131764764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-k8mgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b846d6ed-67bb-45b5-b7d1-d78a3c706339,},Annotations:map[string]string{io.kubernetes.container.hash: 4db88546,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582bf854467818145dfd4b83fc25
cbdb422cedbbe2744909913289f1cf2be0c8,PodSandboxId:953fa70daecb40f5b0ebfee73255ee924309b841b9a49d5ffd7381cd80ca51ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705311674742758773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7jfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0b85c-4001-43a8-b76c-252a40526328,},Annotations:map[string]string{io.kubernetes.container.hash: 46c4275a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f033dbd7cde74733b1bb7a7cf0a7aabfa1e1fbb4d2240a849a7971930bbc7a4d,Pod
SandboxId:c16751fcc724baec9922e3fe4570db96cb9a009d66bc0c7920c2a2788e3bfdbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705311649632327296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71eec3b104d14ae53b22c3efb98fd284,},Annotations:map[string]string{io.kubernetes.container.hash: f307eacb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f956d333fd9dd8d53cfad4a8fc4301d60a2ba2505f9cb645caef4f64dd6cde51,PodSandboxId:901ba14ea9e67d3f7f1a6fbd5988e7254800
65bbde2b1324bdd6974fe16f65fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705311648499014139,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59ed4516dc1136b9fac56f7fab1409bb13f688dd51718287572eff5a03dda9a,PodSandboxId:ea0f2bed38e85e6ef5b3f083eecef037ddc8d24dc1
9c008ebb46464202e6da84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705311647938479236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1d128bb559f34d57b181d107dc3f0a917079eebe10b42c654fa22c2bce6a5c,PodSandboxId:621d493e4d720aade2a5c2ee49d3e89a3d56bf4db295132d
7a0b9e342982866f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705311647830013604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c4fe6c6-00cd-4eff-9835-93e634bd445c name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.351061177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1ce1a330-ea67-45fe-9dac-497c04302379 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.351171098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1ce1a330-ea67-45fe-9dac-497c04302379 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.352662498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2f678a72-440f-49c1-ba8d-5f2388ab41a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.353110697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705311876353099958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=2f678a72-440f-49c1-ba8d-5f2388ab41a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.353719030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30b3a071-f135-4cc3-a971-851aecbfa231 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.353767256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30b3a071-f135-4cc3-a971-851aecbfa231 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:44:36 ingress-addon-legacy-799339 crio[728]: time="2024-01-15 09:44:36.354016336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b64dbd7eb1739503b461d8126a93441dad4bc315b22919ed1d0b2222ccba4f1,PodSandboxId:4bfe5922881b3e5cb191854354a3e1e216f1fe03f6c27101621338dd26aa03d2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705311857464975251,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-c4zt2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4f82180-4e96-49cd-8d30-7ef426095af7,},Annotations:map[string]string{io.kubernetes.container.hash: cb255934,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572f5a628c2f29969bd98dba6dcb6da5c9881e8f2d661d7cf8914d4a17512a3,PodSandboxId:68011cdd4d3a7f1c7754e8b0645fa36ec8c0bd6fa2ffb6b525913899371c2abe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705311716624640202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 370fe753-c6ef-4033-8f4b-752d94c9c6b6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9fea238,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a2dec17ece1394c9c0ffa2b0f1039218116d02ce8d477048916c8fc9487d8b,PodSandboxId:b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705311698101766391,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-jr6vg,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1a2f99ca-e2df-4797-a3be-8b098db5e3ba,},Annotations:map[string]string{io.kubernetes.container.hash: 840870f5,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4614b97830ad4b123387bdd9c2d4b2f47a6010dfb1b9bea1861c11795a107076,PodSandboxId:57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311690192570494,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4dbl8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fa01cb6-05ec-4daf-ad17-52cd90cb5841,},Annotations:map[string]string{io.kubernetes.container.hash: f6fc69fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c35d5ae26919f4ecfa649eaca69f09fe2cb3953a02b20b2dbe5362fb6e5b029,PodSandboxId:e259c90558d2177f0d08fc7975a0e8d7226c0afca6767f55e9c48be0a0911d3b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705311689746250860,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6rxqv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358207aa-f95b-4cd9-980f-200ec27901e2,},Annotations:map[string]string{io.kubernetes.container.hash: 67d2c3db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d138daf347f9fa4bf2752bda61d1b26a4bf4012b19afa6584c787f12f06374fb,PodSandboxId:1995323d0cd74b991666594a3aba3b54ee6c2d8d0b6c669c1c42f00a28af16be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705311676665754965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad671afd-6434-40f6-a949-e554850a4708,},Annotations:map[string]string{io.kubernetes.container.hash: 39c87d0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13165918f1928944c4d4980f89f49fbb121b98088e9ab8e537568de7dba2562,PodSandboxId:738a8c1e5054a5bd02993a6e3fa0310a7d807557eb489d3b5e97afe7ccea447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705311675131764764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-k8mgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b846d6ed-67bb-45b5-b7d1-d78a3c706339,},Annotations:map[string]string{io.kubernetes.container.hash: 4db88546,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582bf854467818145dfd4b83fc25
cbdb422cedbbe2744909913289f1cf2be0c8,PodSandboxId:953fa70daecb40f5b0ebfee73255ee924309b841b9a49d5ffd7381cd80ca51ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705311674742758773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7jfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d0b85c-4001-43a8-b76c-252a40526328,},Annotations:map[string]string{io.kubernetes.container.hash: 46c4275a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f033dbd7cde74733b1bb7a7cf0a7aabfa1e1fbb4d2240a849a7971930bbc7a4d,Pod
SandboxId:c16751fcc724baec9922e3fe4570db96cb9a009d66bc0c7920c2a2788e3bfdbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705311649632327296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71eec3b104d14ae53b22c3efb98fd284,},Annotations:map[string]string{io.kubernetes.container.hash: f307eacb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f956d333fd9dd8d53cfad4a8fc4301d60a2ba2505f9cb645caef4f64dd6cde51,PodSandboxId:901ba14ea9e67d3f7f1a6fbd5988e7254800
65bbde2b1324bdd6974fe16f65fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705311648499014139,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59ed4516dc1136b9fac56f7fab1409bb13f688dd51718287572eff5a03dda9a,PodSandboxId:ea0f2bed38e85e6ef5b3f083eecef037ddc8d24dc1
9c008ebb46464202e6da84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705311647938479236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557108d0824c209762d74d1fb6913635,},Annotations:map[string]string{io.kubernetes.container.hash: 34729005,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1d128bb559f34d57b181d107dc3f0a917079eebe10b42c654fa22c2bce6a5c,PodSandboxId:621d493e4d720aade2a5c2ee49d3e89a3d56bf4db295132d
7a0b9e342982866f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705311647830013604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-799339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30b3a071-f135-4cc3-a971-851aecbfa231 name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b64dbd7eb173       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            18 seconds ago      Running             hello-world-app           0                   4bfe5922881b3       hello-world-app-5f5d8b66bb-c4zt2
	e572f5a628c2f       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   68011cdd4d3a7       nginx
	b5a2dec17ece1       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   b1deddf24472e       ingress-nginx-controller-7fcf777cb7-jr6vg
	4614b97830ad4       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   57413de95cbe3       ingress-nginx-admission-patch-4dbl8
	7c35d5ae26919       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   e259c90558d21       ingress-nginx-admission-create-6rxqv
	d138daf347f9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   1995323d0cd74       storage-provisioner
	a13165918f192       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   738a8c1e5054a       coredns-66bff467f8-k8mgt
	582bf85446781       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   953fa70daecb4       kube-proxy-b7jfx
	f033dbd7cde74       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   c16751fcc724b       etcd-ingress-addon-legacy-799339
	f956d333fd9dd       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   901ba14ea9e67       kube-scheduler-ingress-addon-legacy-799339
	a59ed4516dc11       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   ea0f2bed38e85       kube-apiserver-ingress-addon-legacy-799339
	ec1d128bb559f       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   621d493e4d720       kube-controller-manager-ingress-addon-legacy-799339
	
	
	==> coredns [a13165918f1928944c4d4980f89f49fbb121b98088e9ab8e537568de7dba2562] <==
	[INFO] 10.244.0.5:36919 - 30800 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068867s
	[INFO] 10.244.0.5:56921 - 43951 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000112253s
	[INFO] 10.244.0.5:56921 - 15486 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000138189s
	[INFO] 10.244.0.5:36919 - 44463 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047642s
	[INFO] 10.244.0.5:56921 - 18855 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074292s
	[INFO] 10.244.0.5:36919 - 48890 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037088s
	[INFO] 10.244.0.5:56921 - 51800 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000205772s
	[INFO] 10.244.0.5:36919 - 62747 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000253284s
	[INFO] 10.244.0.5:36919 - 51521 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036288s
	[INFO] 10.244.0.5:36919 - 18545 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098242s
	[INFO] 10.244.0.5:36919 - 42358 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103202s
	[INFO] 10.244.0.5:48500 - 49212 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181483s
	[INFO] 10.244.0.5:51472 - 26336 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000288226s
	[INFO] 10.244.0.5:48500 - 25076 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000229214s
	[INFO] 10.244.0.5:48500 - 12934 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034581s
	[INFO] 10.244.0.5:51472 - 32196 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000274s
	[INFO] 10.244.0.5:48500 - 35463 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036748s
	[INFO] 10.244.0.5:51472 - 11913 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023852s
	[INFO] 10.244.0.5:51472 - 39200 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118435s
	[INFO] 10.244.0.5:48500 - 22598 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071325s
	[INFO] 10.244.0.5:51472 - 17442 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000094888s
	[INFO] 10.244.0.5:48500 - 8667 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083741s
	[INFO] 10.244.0.5:51472 - 64785 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062742s
	[INFO] 10.244.0.5:48500 - 54706 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065393s
	[INFO] 10.244.0.5:51472 - 21212 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057421s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-799339
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-799339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=ingress-addon-legacy-799339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_40_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:40:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-799339
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:44:28 +0000   Mon, 15 Jan 2024 09:40:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:44:28 +0000   Mon, 15 Jan 2024 09:40:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:44:28 +0000   Mon, 15 Jan 2024 09:40:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:44:28 +0000   Mon, 15 Jan 2024 09:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ingress-addon-legacy-799339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 e56f269c2ba3445bb245043187437f23
	  System UUID:                e56f269c-2ba3-445b-b245-043187437f23
	  Boot ID:                    790dbc98-b1c3-47c4-8487-32434f68c44c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-c4zt2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-k8mgt                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m24s
	  kube-system                 etcd-ingress-addon-legacy-799339                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-799339             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-799339    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-b7jfx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-799339             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x4 over 3m50s)  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s                  kubelet     Node ingress-addon-legacy-799339 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m29s                  kubelet     Node ingress-addon-legacy-799339 status is now: NodeReady
	  Normal  Starting                 3m22s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan15 09:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094295] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.368337] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.393835] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148385] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.035493] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.605845] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.106001] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.151965] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.115965] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.209242] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +7.598293] systemd-fstab-generator[1036]: Ignoring "noauto" for root device
	[  +2.028630] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.539392] systemd-fstab-generator[1413]: Ignoring "noauto" for root device
	[Jan15 09:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.492469] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.640745] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.361433] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.234551] kauditd_printk_skb: 3 callbacks suppressed
	[Jan15 09:44] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [f033dbd7cde74733b1bb7a7cf0a7aabfa1e1fbb4d2240a849a7971930bbc7a4d] <==
	raft2024/01/15 09:40:49 INFO: 86c29206b457f123 became follower at term 1
	raft2024/01/15 09:40:49 INFO: 86c29206b457f123 switched to configuration voters=(9710484304057332003)
	2024-01-15 09:40:49.781117 W | auth: simple token is not cryptographically signed
	2024-01-15 09:40:49.784851 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-15 09:40:49.786200 I | etcdserver: 86c29206b457f123 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/15 09:40:49 INFO: 86c29206b457f123 switched to configuration voters=(9710484304057332003)
	2024-01-15 09:40:49.786984 I | etcdserver/membership: added member 86c29206b457f123 [https://192.168.39.118:2380] to cluster 56e4fbef5627b38f
	2024-01-15 09:40:49.788122 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 09:40:49.788212 I | embed: listening for peers on 192.168.39.118:2380
	2024-01-15 09:40:49.788276 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/15 09:40:50 INFO: 86c29206b457f123 is starting a new election at term 1
	raft2024/01/15 09:40:50 INFO: 86c29206b457f123 became candidate at term 2
	raft2024/01/15 09:40:50 INFO: 86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 2
	raft2024/01/15 09:40:50 INFO: 86c29206b457f123 became leader at term 2
	raft2024/01/15 09:40:50 INFO: raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 2
	2024-01-15 09:40:50.573898 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-15 09:40:50.575474 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-15 09:40:50.576005 I | etcdserver: published {Name:ingress-addon-legacy-799339 ClientURLs:[https://192.168.39.118:2379]} to cluster 56e4fbef5627b38f
	2024-01-15 09:40:50.576157 I | embed: ready to serve client requests
	2024-01-15 09:40:50.576379 I | embed: ready to serve client requests
	2024-01-15 09:40:50.577405 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 09:40:50.577653 I | embed: serving client requests on 192.168.39.118:2379
	2024-01-15 09:40:50.577713 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-15 09:41:13.234867 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/default\" " with result "range_response_count:1 size:181" took too long (552.034298ms) to execute
	2024-01-15 09:41:13.237325 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (363.571837ms) to execute
	
	
	==> kernel <==
	 09:44:36 up 4 min,  0 users,  load average: 0.44, 0.46, 0.21
	Linux ingress-addon-legacy-799339 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a59ed4516dc1136b9fac56f7fab1409bb13f688dd51718287572eff5a03dda9a] <==
	I0115 09:40:54.300461       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0115 09:40:54.303804       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0115 09:40:55.195846       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0115 09:40:55.195873       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0115 09:40:55.202071       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0115 09:40:55.208839       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0115 09:40:55.208876       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0115 09:40:55.666286       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 09:40:55.712251       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0115 09:40:55.855841       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.118]
	I0115 09:40:55.856683       1 controller.go:609] quota admission added evaluator for: endpoints
	I0115 09:40:55.862409       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 09:40:56.549826       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0115 09:40:57.324293       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0115 09:40:57.484680       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0115 09:40:57.825174       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 09:41:12.514734       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0115 09:41:12.519029       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0115 09:41:13.230978       1 trace.go:116] Trace[504917986]: "Create" url:/api/v1/namespaces/kube-public/serviceaccounts,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:service-account-controller,client:192.168.39.118 (started: 2024-01-15 09:41:12.680799462 +0000 UTC m=+24.611608043) (total time: 550.150738ms):
	Trace[504917986]: [550.12157ms] [550.083059ms] Object stored in database
	I0115 09:41:13.235831       1 trace.go:116] Trace[417075086]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/default,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.118 (started: 2024-01-15 09:41:12.681464947 +0000 UTC m=+24.612273530) (total time: 554.345558ms):
	Trace[417075086]: [554.323832ms] [554.318396ms] About to write a response
	I0115 09:41:26.756262       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0115 09:41:52.649761       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0115 09:44:28.874848       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [ec1d128bb559f34d57b181d107dc3f0a917079eebe10b42c654fa22c2bce6a5c] <==
	I0115 09:41:12.809859       1 shared_informer.go:230] Caches are synced for attach detach 
	I0115 09:41:13.010605       1 shared_informer.go:230] Caches are synced for taint 
	I0115 09:41:13.010773       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0115 09:41:13.010852       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-799339. Assuming now as a timestamp.
	I0115 09:41:13.010923       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0115 09:41:13.011314       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0115 09:41:13.012203       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-799339", UID:"70062893-863a-4ea8-8bf1-8f4d38433ace", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-799339 event: Registered Node ingress-addon-legacy-799339 in Controller
	I0115 09:41:13.017606       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 09:41:13.061172       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 09:41:13.075309       1 shared_informer.go:230] Caches are synced for disruption 
	I0115 09:41:13.075351       1 disruption.go:339] Sending events to api server.
	I0115 09:41:13.091097       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 09:41:13.091150       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0115 09:41:13.109886       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0115 09:41:13.109975       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 09:41:13.383962       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"355e5bd3-7f2e-4db9-a273-6ed7c0fd0532", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0115 09:41:13.456064       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c7a5ccd5-517a-4c8e-ad67-0fe998ab97a6", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-bjspl
	I0115 09:41:26.712795       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"94bf1b16-f436-4e81-bdc4-d034142b13b9", APIVersion:"apps/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0115 09:41:26.725392       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3f521cbf-bdbd-4d85-aadb-08182cfa0e9e", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-jr6vg
	I0115 09:41:26.795040       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"93f2d499-36d0-44f9-860e-e38c2ba3251f", APIVersion:"batch/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-6rxqv
	I0115 09:41:26.863359       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f98b9af0-d3f4-4a8b-9726-6acf37d53680", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-4dbl8
	I0115 09:41:30.074130       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"93f2d499-36d0-44f9-860e-e38c2ba3251f", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 09:41:31.091170       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f98b9af0-d3f4-4a8b-9726-6acf37d53680", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 09:44:14.610154       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6d999a0a-2ec4-45e4-a35c-9ecf4eaa3b7c", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0115 09:44:14.617240       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"4120eab2-b438-4e0a-994c-eaeb3f430a4b", APIVersion:"apps/v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-c4zt2
	
	
	==> kube-proxy [582bf854467818145dfd4b83fc25cbdb422cedbbe2744909913289f1cf2be0c8] <==
	W0115 09:41:14.924110       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0115 09:41:14.934884       1 node.go:136] Successfully retrieved node IP: 192.168.39.118
	I0115 09:41:14.934947       1 server_others.go:186] Using iptables Proxier.
	I0115 09:41:14.935155       1 server.go:583] Version: v1.18.20
	I0115 09:41:14.936643       1 config.go:315] Starting service config controller
	I0115 09:41:14.940483       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0115 09:41:14.939317       1 config.go:133] Starting endpoints config controller
	I0115 09:41:14.940783       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0115 09:41:15.044830       1 shared_informer.go:230] Caches are synced for service config 
	I0115 09:41:15.045124       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [f956d333fd9dd8d53cfad4a8fc4301d60a2ba2505f9cb645caef4f64dd6cde51] <==
	I0115 09:40:54.303962       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 09:40:54.304000       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 09:40:54.306017       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0115 09:40:54.306123       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 09:40:54.306130       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 09:40:54.306188       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0115 09:40:54.307933       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:40:54.309002       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 09:40:54.310883       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:40:54.310981       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:40:54.311035       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:40:54.311084       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 09:40:54.311127       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:40:54.311172       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 09:40:54.311217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 09:40:54.311264       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:40:54.311308       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:40:54.311818       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:40:55.239018       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:40:55.261215       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:40:55.354926       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:40:55.442958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0115 09:40:55.806238       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0115 09:41:12.582988       1 factory.go:503] pod: kube-system/coredns-66bff467f8-bjspl is already present in the active queue
	E0115 09:41:12.635688       1 factory.go:503] pod: kube-system/coredns-66bff467f8-k8mgt is already present in the active queue
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 09:40:20 UTC, ends at Mon 2024-01-15 09:44:36 UTC. --
	Jan 15 09:41:32 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:32.224230    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa01cb6-05ec-4daf-ad17-52cd90cb5841-ingress-nginx-admission-token-9x4bp" (OuterVolumeSpecName: "ingress-nginx-admission-token-9x4bp") pod "1fa01cb6-05ec-4daf-ad17-52cd90cb5841" (UID: "1fa01cb6-05ec-4daf-ad17-52cd90cb5841"). InnerVolumeSpecName "ingress-nginx-admission-token-9x4bp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:41:32 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:32.304409    1420 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-9x4bp" (UniqueName: "kubernetes.io/secret/1fa01cb6-05ec-4daf-ad17-52cd90cb5841-ingress-nginx-admission-token-9x4bp") on node "ingress-addon-legacy-799339" DevicePath ""
	Jan 15 09:41:32 ingress-addon-legacy-799339 kubelet[1420]: W0115 09:41:32.372952    1420 pod_container_deletor.go:77] Container "57413de95cbe349d1915153d16b3edd4b24c95eb00a8bb11bff711a802d95eb4" not found in pod's containers
	Jan 15 09:41:39 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:39.531891    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 15 09:41:39 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:39.629593    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-4ptlq" (UniqueName: "kubernetes.io/secret/dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7-minikube-ingress-dns-token-4ptlq") pod "kube-ingress-dns-minikube" (UID: "dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7")
	Jan 15 09:41:52 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:52.825235    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 15 09:41:52 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:41:52.872194    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-bll5j" (UniqueName: "kubernetes.io/secret/370fe753-c6ef-4033-8f4b-752d94c9c6b6-default-token-bll5j") pod "nginx" (UID: "370fe753-c6ef-4033-8f4b-752d94c9c6b6")
	Jan 15 09:44:14 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:14.628166    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 15 09:44:14 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:14.718133    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-bll5j" (UniqueName: "kubernetes.io/secret/b4f82180-4e96-49cd-8d30-7ef426095af7-default-token-bll5j") pod "hello-world-app-5f5d8b66bb-c4zt2" (UID: "b4f82180-4e96-49cd-8d30-7ef426095af7")
	Jan 15 09:44:16 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:16.233118    1420 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f9a3cca5f83d5ff5a525ca8f291d3c6608e34ebc327f424d8cd93b4ca5c9eddf
	Jan 15 09:44:16 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:16.508769    1420 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f9a3cca5f83d5ff5a525ca8f291d3c6608e34ebc327f424d8cd93b4ca5c9eddf
	Jan 15 09:44:16 ingress-addon-legacy-799339 kubelet[1420]: E0115 09:44:16.509293    1420 remote_runtime.go:295] ContainerStatus "f9a3cca5f83d5ff5a525ca8f291d3c6608e34ebc327f424d8cd93b4ca5c9eddf" from runtime service failed: rpc error: code = NotFound desc = could not find container "f9a3cca5f83d5ff5a525ca8f291d3c6608e34ebc327f424d8cd93b4ca5c9eddf": container with ID starting with f9a3cca5f83d5ff5a525ca8f291d3c6608e34ebc327f424d8cd93b4ca5c9eddf not found: ID does not exist
	Jan 15 09:44:17 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:17.426561    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-4ptlq" (UniqueName: "kubernetes.io/secret/dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7-minikube-ingress-dns-token-4ptlq") pod "dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7" (UID: "dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7")
	Jan 15 09:44:17 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:17.432621    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7-minikube-ingress-dns-token-4ptlq" (OuterVolumeSpecName: "minikube-ingress-dns-token-4ptlq") pod "dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7" (UID: "dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7"). InnerVolumeSpecName "minikube-ingress-dns-token-4ptlq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:44:17 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:17.526948    1420 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-4ptlq" (UniqueName: "kubernetes.io/secret/dfcfd2cd-e0d1-462e-9b57-1cc4a38044c7-minikube-ingress-dns-token-4ptlq") on node "ingress-addon-legacy-799339" DevicePath ""
	Jan 15 09:44:28 ingress-addon-legacy-799339 kubelet[1420]: E0115 09:44:28.866612    1420 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jr6vg.17aa7c1c0eb8bfca", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jr6vg", UID:"1a2f99ca-e2df-4797-a3be-8b098db5e3ba", APIVersion:"v1", ResourceVersion:"454", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-799339"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1615d7f3352e7ca, ext:211591701922, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1615d7f3352e7ca, ext:211591701922, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jr6vg.17aa7c1c0eb8bfca" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 09:44:28 ingress-addon-legacy-799339 kubelet[1420]: E0115 09:44:28.879868    1420 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jr6vg.17aa7c1c0eb8bfca", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jr6vg", UID:"1a2f99ca-e2df-4797-a3be-8b098db5e3ba", APIVersion:"v1", ResourceVersion:"454", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-799339"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1615d7f3352e7ca, ext:211591701922, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1615d7f34382fa5, ext:211606728062, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jr6vg.17aa7c1c0eb8bfca" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 09:44:31 ingress-addon-legacy-799339 kubelet[1420]: W0115 09:44:31.306189    1420 pod_container_deletor.go:77] Container "b1deddf24472e37e153ec985d5da65b19471a8362d4de8c42e0dfa2054c29eac" not found in pod's containers
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.075315    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-webhook-cert") pod "1a2f99ca-e2df-4797-a3be-8b098db5e3ba" (UID: "1a2f99ca-e2df-4797-a3be-8b098db5e3ba")
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.075361    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-b5b4j" (UniqueName: "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-ingress-nginx-token-b5b4j") pod "1a2f99ca-e2df-4797-a3be-8b098db5e3ba" (UID: "1a2f99ca-e2df-4797-a3be-8b098db5e3ba")
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.078346    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-ingress-nginx-token-b5b4j" (OuterVolumeSpecName: "ingress-nginx-token-b5b4j") pod "1a2f99ca-e2df-4797-a3be-8b098db5e3ba" (UID: "1a2f99ca-e2df-4797-a3be-8b098db5e3ba"). InnerVolumeSpecName "ingress-nginx-token-b5b4j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.078859    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1a2f99ca-e2df-4797-a3be-8b098db5e3ba" (UID: "1a2f99ca-e2df-4797-a3be-8b098db5e3ba"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.175802    1420 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-webhook-cert") on node "ingress-addon-legacy-799339" DevicePath ""
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: I0115 09:44:33.175834    1420 reconciler.go:319] Volume detached for volume "ingress-nginx-token-b5b4j" (UniqueName: "kubernetes.io/secret/1a2f99ca-e2df-4797-a3be-8b098db5e3ba-ingress-nginx-token-b5b4j") on node "ingress-addon-legacy-799339" DevicePath ""
	Jan 15 09:44:33 ingress-addon-legacy-799339 kubelet[1420]: W0115 09:44:33.971160    1420 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1a2f99ca-e2df-4797-a3be-8b098db5e3ba/volumes" does not exist
	
	
	==> storage-provisioner [d138daf347f9fa4bf2752bda61d1b26a4bf4012b19afa6584c787f12f06374fb] <==
	I0115 09:41:16.769159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 09:41:16.779872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 09:41:16.779936       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 09:41:16.787283       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 09:41:16.787424       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-799339_0fad1309-3201-40be-aeef-6bd8b08a4a36!
	I0115 09:41:16.791566       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a01900e-3817-4030-95db-c5e58912fc81", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-799339_0fad1309-3201-40be-aeef-6bd8b08a4a36 became leader
	I0115 09:41:16.887755       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-799339_0fad1309-3201-40be-aeef-6bd8b08a4a36!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-799339 -n ingress-addon-legacy-799339
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-799339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.80s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (200.48877ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-h2lk5): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (185.55754ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-pwx96): exit status 1
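Editor's note (hedged, not part of the test output): "ping: permission denied (are you root?)" from busybox is, in many CRI-O based environments, a symptom of the container lacking CAP_NET_RAW, which busybox ping needs to open a raw ICMP socket. Whether that is the cause of this particular failure is an assumption; the report itself does not diagnose it. Below is a minimal sketch of how a pod could be granted that capability using the standard k8s.io/api types. The pod name, image tag, and command are illustrative and not taken from the test.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// pingCapablePod builds a busybox pod whose container explicitly adds
	// CAP_NET_RAW, so that `ping` can open a raw ICMP socket even when the
	// container runtime's default capability set drops it.
	func pingCapablePod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "busybox-ping"},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "busybox:1.36",
					Command: []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{
						Capabilities: &corev1.Capabilities{
							Add: []corev1.Capability{"NET_RAW"},
						},
					},
				}},
			},
		}
	}

	func main() {
		// Print the pod name as a trivial check that the spec builds.
		fmt.Println(pingCapablePod().Name)
	}

An alternative approach, equally hedged, is to allow unprivileged ICMP inside the pod via the net.ipv4.ping_group_range sysctl instead of adding a capability.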
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-975382 -n multinode-975382
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-975382 logs -n 25: (1.320002502s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-731501 ssh -- ls                    | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-731501 ssh --                       | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-731501                           | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	| start   | -p mount-start-2-731501                           | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC |                     |
	|         | --profile mount-start-2-731501                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-731501 ssh -- ls                    | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-731501 ssh --                       | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-731501                           | mount-start-2-731501 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	| delete  | -p mount-start-1-713722                           | mount-start-1-713722 | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:49 UTC |
	| start   | -p multinode-975382                               | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:49 UTC | 15 Jan 24 09:51 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- apply -f                   | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- rollout                    | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- get pods -o                | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- get pods -o                | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-h2lk5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-pwx96 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-h2lk5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-pwx96 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-h2lk5 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-pwx96 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- get pods -o                | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-h2lk5                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC |                     |
	|         | busybox-5bc68d56bd-h2lk5 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC | 15 Jan 24 09:51 UTC |
	|         | busybox-5bc68d56bd-pwx96                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-975382 -- exec                       | multinode-975382     | jenkins | v1.32.0 | 15 Jan 24 09:51 UTC |                     |
	|         | busybox-5bc68d56bd-pwx96 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:49:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:49:31.209995   26437 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:49:31.210227   26437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:49:31.210235   26437 out.go:309] Setting ErrFile to fd 2...
	I0115 09:49:31.210239   26437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:49:31.210430   26437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:49:31.210975   26437 out.go:303] Setting JSON to false
	I0115 09:49:31.211821   26437 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1871,"bootTime":1705310300,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:49:31.211877   26437 start.go:138] virtualization: kvm guest
	I0115 09:49:31.214319   26437 out.go:177] * [multinode-975382] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:49:31.216030   26437 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:49:31.217744   26437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:49:31.216045   26437 notify.go:220] Checking for updates...
	I0115 09:49:31.221021   26437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:49:31.222662   26437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:49:31.224272   26437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:49:31.225662   26437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:49:31.227340   26437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:49:31.262784   26437 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 09:49:31.264163   26437 start.go:298] selected driver: kvm2
	I0115 09:49:31.264175   26437 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:49:31.264188   26437 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:49:31.264874   26437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:49:31.264955   26437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:49:31.279247   26437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:49:31.279287   26437 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:49:31.279476   26437 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:49:31.279525   26437 cni.go:84] Creating CNI manager for ""
	I0115 09:49:31.279537   26437 cni.go:136] 0 nodes found, recommending kindnet
	I0115 09:49:31.279545   26437 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 09:49:31.279554   26437 start_flags.go:321] config:
	{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:49:31.279664   26437 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:49:31.283039   26437 out.go:177] * Starting control plane node multinode-975382 in cluster multinode-975382
	I0115 09:49:31.284423   26437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:49:31.284447   26437 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:49:31.284454   26437 cache.go:56] Caching tarball of preloaded images
	I0115 09:49:31.284528   26437 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:49:31.284538   26437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:49:31.285637   26437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:49:31.285685   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json: {Name:mkc37151f12494885032311ea90eb94de15e48d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:49:31.286033   26437 start.go:365] acquiring machines lock for multinode-975382: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:49:31.286083   26437 start.go:369] acquired machines lock for "multinode-975382" in 29.681µs
	I0115 09:49:31.286100   26437 start.go:93] Provisioning new machine with config: &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:49:31.286158   26437 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 09:49:31.288028   26437 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 09:49:31.288147   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:49:31.288176   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:49:31.301903   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0115 09:49:31.302285   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:49:31.302799   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:49:31.302817   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:49:31.303136   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:49:31.303312   26437 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:49:31.303442   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:31.303576   26437 start.go:159] libmachine.API.Create for "multinode-975382" (driver="kvm2")
	I0115 09:49:31.303610   26437 client.go:168] LocalClient.Create starting
	I0115 09:49:31.303645   26437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 09:49:31.303682   26437 main.go:141] libmachine: Decoding PEM data...
	I0115 09:49:31.303699   26437 main.go:141] libmachine: Parsing certificate...
	I0115 09:49:31.303777   26437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 09:49:31.303803   26437 main.go:141] libmachine: Decoding PEM data...
	I0115 09:49:31.303825   26437 main.go:141] libmachine: Parsing certificate...
	I0115 09:49:31.303850   26437 main.go:141] libmachine: Running pre-create checks...
	I0115 09:49:31.303863   26437 main.go:141] libmachine: (multinode-975382) Calling .PreCreateCheck
	I0115 09:49:31.304168   26437 main.go:141] libmachine: (multinode-975382) Calling .GetConfigRaw
	I0115 09:49:31.304535   26437 main.go:141] libmachine: Creating machine...
	I0115 09:49:31.304564   26437 main.go:141] libmachine: (multinode-975382) Calling .Create
	I0115 09:49:31.304676   26437 main.go:141] libmachine: (multinode-975382) Creating KVM machine...
	I0115 09:49:31.305651   26437 main.go:141] libmachine: (multinode-975382) DBG | found existing default KVM network
	I0115 09:49:31.306314   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:31.306160   26460 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I0115 09:49:31.311352   26437 main.go:141] libmachine: (multinode-975382) DBG | trying to create private KVM network mk-multinode-975382 192.168.39.0/24...
	I0115 09:49:31.380457   26437 main.go:141] libmachine: (multinode-975382) DBG | private KVM network mk-multinode-975382 192.168.39.0/24 created
	I0115 09:49:31.380500   26437 main.go:141] libmachine: (multinode-975382) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382 ...
	I0115 09:49:31.380514   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:31.380423   26460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:49:31.380537   26437 main.go:141] libmachine: (multinode-975382) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 09:49:31.380567   26437 main.go:141] libmachine: (multinode-975382) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 09:49:31.579477   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:31.579350   26460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa...
	I0115 09:49:31.959558   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:31.959432   26460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/multinode-975382.rawdisk...
	I0115 09:49:31.959611   26437 main.go:141] libmachine: (multinode-975382) DBG | Writing magic tar header
	I0115 09:49:31.959659   26437 main.go:141] libmachine: (multinode-975382) DBG | Writing SSH key tar header
	I0115 09:49:31.959706   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:31.959548   26460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382 ...
	I0115 09:49:31.959721   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382 (perms=drwx------)
	I0115 09:49:31.959751   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 09:49:31.959771   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382
	I0115 09:49:31.959787   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 09:49:31.959799   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 09:49:31.959809   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:49:31.959820   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 09:49:31.959837   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 09:49:31.959851   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home/jenkins
	I0115 09:49:31.959864   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 09:49:31.959877   26437 main.go:141] libmachine: (multinode-975382) DBG | Checking permissions on dir: /home
	I0115 09:49:31.959890   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 09:49:31.959903   26437 main.go:141] libmachine: (multinode-975382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 09:49:31.959912   26437 main.go:141] libmachine: (multinode-975382) DBG | Skipping /home - not owner
	I0115 09:49:31.959918   26437 main.go:141] libmachine: (multinode-975382) Creating domain...
	I0115 09:49:31.960748   26437 main.go:141] libmachine: (multinode-975382) define libvirt domain using xml: 
	I0115 09:49:31.960772   26437 main.go:141] libmachine: (multinode-975382) <domain type='kvm'>
	I0115 09:49:31.960785   26437 main.go:141] libmachine: (multinode-975382)   <name>multinode-975382</name>
	I0115 09:49:31.960805   26437 main.go:141] libmachine: (multinode-975382)   <memory unit='MiB'>2200</memory>
	I0115 09:49:31.960815   26437 main.go:141] libmachine: (multinode-975382)   <vcpu>2</vcpu>
	I0115 09:49:31.960829   26437 main.go:141] libmachine: (multinode-975382)   <features>
	I0115 09:49:31.960837   26437 main.go:141] libmachine: (multinode-975382)     <acpi/>
	I0115 09:49:31.960842   26437 main.go:141] libmachine: (multinode-975382)     <apic/>
	I0115 09:49:31.960848   26437 main.go:141] libmachine: (multinode-975382)     <pae/>
	I0115 09:49:31.960853   26437 main.go:141] libmachine: (multinode-975382)     
	I0115 09:49:31.960860   26437 main.go:141] libmachine: (multinode-975382)   </features>
	I0115 09:49:31.960865   26437 main.go:141] libmachine: (multinode-975382)   <cpu mode='host-passthrough'>
	I0115 09:49:31.960889   26437 main.go:141] libmachine: (multinode-975382)   
	I0115 09:49:31.960912   26437 main.go:141] libmachine: (multinode-975382)   </cpu>
	I0115 09:49:31.960924   26437 main.go:141] libmachine: (multinode-975382)   <os>
	I0115 09:49:31.960940   26437 main.go:141] libmachine: (multinode-975382)     <type>hvm</type>
	I0115 09:49:31.960954   26437 main.go:141] libmachine: (multinode-975382)     <boot dev='cdrom'/>
	I0115 09:49:31.960966   26437 main.go:141] libmachine: (multinode-975382)     <boot dev='hd'/>
	I0115 09:49:31.960986   26437 main.go:141] libmachine: (multinode-975382)     <bootmenu enable='no'/>
	I0115 09:49:31.961005   26437 main.go:141] libmachine: (multinode-975382)   </os>
	I0115 09:49:31.961019   26437 main.go:141] libmachine: (multinode-975382)   <devices>
	I0115 09:49:31.961028   26437 main.go:141] libmachine: (multinode-975382)     <disk type='file' device='cdrom'>
	I0115 09:49:31.961040   26437 main.go:141] libmachine: (multinode-975382)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/boot2docker.iso'/>
	I0115 09:49:31.961048   26437 main.go:141] libmachine: (multinode-975382)       <target dev='hdc' bus='scsi'/>
	I0115 09:49:31.961055   26437 main.go:141] libmachine: (multinode-975382)       <readonly/>
	I0115 09:49:31.961063   26437 main.go:141] libmachine: (multinode-975382)     </disk>
	I0115 09:49:31.961075   26437 main.go:141] libmachine: (multinode-975382)     <disk type='file' device='disk'>
	I0115 09:49:31.961092   26437 main.go:141] libmachine: (multinode-975382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 09:49:31.961111   26437 main.go:141] libmachine: (multinode-975382)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/multinode-975382.rawdisk'/>
	I0115 09:49:31.961125   26437 main.go:141] libmachine: (multinode-975382)       <target dev='hda' bus='virtio'/>
	I0115 09:49:31.961131   26437 main.go:141] libmachine: (multinode-975382)     </disk>
	I0115 09:49:31.961139   26437 main.go:141] libmachine: (multinode-975382)     <interface type='network'>
	I0115 09:49:31.961149   26437 main.go:141] libmachine: (multinode-975382)       <source network='mk-multinode-975382'/>
	I0115 09:49:31.961162   26437 main.go:141] libmachine: (multinode-975382)       <model type='virtio'/>
	I0115 09:49:31.961175   26437 main.go:141] libmachine: (multinode-975382)     </interface>
	I0115 09:49:31.961190   26437 main.go:141] libmachine: (multinode-975382)     <interface type='network'>
	I0115 09:49:31.961202   26437 main.go:141] libmachine: (multinode-975382)       <source network='default'/>
	I0115 09:49:31.961215   26437 main.go:141] libmachine: (multinode-975382)       <model type='virtio'/>
	I0115 09:49:31.961221   26437 main.go:141] libmachine: (multinode-975382)     </interface>
	I0115 09:49:31.961230   26437 main.go:141] libmachine: (multinode-975382)     <serial type='pty'>
	I0115 09:49:31.961235   26437 main.go:141] libmachine: (multinode-975382)       <target port='0'/>
	I0115 09:49:31.961254   26437 main.go:141] libmachine: (multinode-975382)     </serial>
	I0115 09:49:31.961270   26437 main.go:141] libmachine: (multinode-975382)     <console type='pty'>
	I0115 09:49:31.961287   26437 main.go:141] libmachine: (multinode-975382)       <target type='serial' port='0'/>
	I0115 09:49:31.961298   26437 main.go:141] libmachine: (multinode-975382)     </console>
	I0115 09:49:31.961305   26437 main.go:141] libmachine: (multinode-975382)     <rng model='virtio'>
	I0115 09:49:31.961318   26437 main.go:141] libmachine: (multinode-975382)       <backend model='random'>/dev/random</backend>
	I0115 09:49:31.961332   26437 main.go:141] libmachine: (multinode-975382)     </rng>
	I0115 09:49:31.961347   26437 main.go:141] libmachine: (multinode-975382)     
	I0115 09:49:31.961360   26437 main.go:141] libmachine: (multinode-975382)     
	I0115 09:49:31.961372   26437 main.go:141] libmachine: (multinode-975382)   </devices>
	I0115 09:49:31.961385   26437 main.go:141] libmachine: (multinode-975382) </domain>
	I0115 09:49:31.961391   26437 main.go:141] libmachine: (multinode-975382) 
	I0115 09:49:31.965803   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:a4:a1:80 in network default
	I0115 09:49:31.966320   26437 main.go:141] libmachine: (multinode-975382) Ensuring networks are active...
	I0115 09:49:31.966346   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:31.966913   26437 main.go:141] libmachine: (multinode-975382) Ensuring network default is active
	I0115 09:49:31.967201   26437 main.go:141] libmachine: (multinode-975382) Ensuring network mk-multinode-975382 is active
	I0115 09:49:31.967740   26437 main.go:141] libmachine: (multinode-975382) Getting domain xml...
	I0115 09:49:31.968381   26437 main.go:141] libmachine: (multinode-975382) Creating domain...
	I0115 09:49:33.132110   26437 main.go:141] libmachine: (multinode-975382) Waiting to get IP...
	I0115 09:49:33.132934   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:33.133309   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:33.133361   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:33.133304   26460 retry.go:31] will retry after 298.21054ms: waiting for machine to come up
	I0115 09:49:33.432735   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:33.433238   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:33.433268   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:33.433183   26460 retry.go:31] will retry after 257.821825ms: waiting for machine to come up
	I0115 09:49:33.692557   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:33.692995   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:33.693025   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:33.692948   26460 retry.go:31] will retry after 319.470017ms: waiting for machine to come up
	I0115 09:49:34.014528   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:34.014917   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:34.014944   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:34.014880   26460 retry.go:31] will retry after 472.282123ms: waiting for machine to come up
	I0115 09:49:34.488241   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:34.488720   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:34.488763   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:34.488675   26460 retry.go:31] will retry after 681.597608ms: waiting for machine to come up
	I0115 09:49:35.171705   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:35.172280   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:35.172299   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:35.172250   26460 retry.go:31] will retry after 618.541558ms: waiting for machine to come up
	I0115 09:49:35.792007   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:35.792459   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:35.792491   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:35.792444   26460 retry.go:31] will retry after 1.078899598s: waiting for machine to come up
	I0115 09:49:36.873113   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:36.873479   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:36.873515   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:36.873454   26460 retry.go:31] will retry after 1.325701658s: waiting for machine to come up
	I0115 09:49:38.200841   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:38.201252   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:38.201279   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:38.201211   26460 retry.go:31] will retry after 1.388534885s: waiting for machine to come up
	I0115 09:49:39.591612   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:39.591973   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:39.592000   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:39.591937   26460 retry.go:31] will retry after 2.132419023s: waiting for machine to come up
	I0115 09:49:41.726034   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:41.726440   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:41.726472   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:41.726369   26460 retry.go:31] will retry after 2.070731308s: waiting for machine to come up
	I0115 09:49:43.799470   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:43.799865   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:43.799893   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:43.799816   26460 retry.go:31] will retry after 2.445237705s: waiting for machine to come up
	I0115 09:49:46.246113   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:46.246519   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:46.246543   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:46.246459   26460 retry.go:31] will retry after 3.060653904s: waiting for machine to come up
	I0115 09:49:49.310858   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:49.311193   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:49:49.311223   26437 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:49:49.311148   26460 retry.go:31] will retry after 5.102047311s: waiting for machine to come up
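The "will retry after ..." lines above come from a simple poll-with-backoff loop: the driver asks libvirt for a DHCP lease matching the domain's MAC address and sleeps a little longer each time none exists yet. A minimal stand-alone sketch of that pattern in Go (lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the delays are illustrative, not minikube's exact schedule):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for the libvirt DHCP lease query
// keyed by the domain's MAC address; here it always fails so the loop retries.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease for " + mac + " yet")
}

// waitForIP polls with a growing delay, mirroring the "will retry after ..."
// lines in the log above, until a lease appears or the timeout expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// A little jitter keeps parallel waiters from polling libvirt in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:39:66:0a", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}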
	I0115 09:49:54.415366   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.415755   26437 main.go:141] libmachine: (multinode-975382) Found IP for machine: 192.168.39.217
	I0115 09:49:54.415779   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has current primary IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.415786   26437 main.go:141] libmachine: (multinode-975382) Reserving static IP address...
	I0115 09:49:54.416110   26437 main.go:141] libmachine: (multinode-975382) DBG | unable to find host DHCP lease matching {name: "multinode-975382", mac: "52:54:00:39:66:0a", ip: "192.168.39.217"} in network mk-multinode-975382
	I0115 09:49:54.485261   26437 main.go:141] libmachine: (multinode-975382) DBG | Getting to WaitForSSH function...
	I0115 09:49:54.485298   26437 main.go:141] libmachine: (multinode-975382) Reserved static IP address: 192.168.39.217
	I0115 09:49:54.485312   26437 main.go:141] libmachine: (multinode-975382) Waiting for SSH to be available...
	I0115 09:49:54.487751   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.488078   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:54.488101   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.488254   26437 main.go:141] libmachine: (multinode-975382) DBG | Using SSH client type: external
	I0115 09:49:54.488281   26437 main.go:141] libmachine: (multinode-975382) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa (-rw-------)
	I0115 09:49:54.488311   26437 main.go:141] libmachine: (multinode-975382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 09:49:54.488327   26437 main.go:141] libmachine: (multinode-975382) DBG | About to run SSH command:
	I0115 09:49:54.488343   26437 main.go:141] libmachine: (multinode-975382) DBG | exit 0
	I0115 09:49:54.577875   26437 main.go:141] libmachine: (multinode-975382) DBG | SSH cmd err, output: <nil>: 
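The "Using SSH client type: external" step above shells out to the system ssh binary with a fixed option set and runs "exit 0" as a reachability probe. A rough stand-alone equivalent (only a subset of the logged options is shown, and the key path and address are copied from the log; this is a sketch, not the minikube code path):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Options mirror part of the "Using SSH client type: external" line above.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa",
		"-p", "22",
		"docker@192.168.39.217",
		"exit 0", // the reachability probe run by the provisioner
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}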
	I0115 09:49:54.578178   26437 main.go:141] libmachine: (multinode-975382) KVM machine creation complete!
	I0115 09:49:54.578494   26437 main.go:141] libmachine: (multinode-975382) Calling .GetConfigRaw
	I0115 09:49:54.578999   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:54.579221   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:54.579357   26437 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 09:49:54.579374   26437 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:49:54.580596   26437 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 09:49:54.580610   26437 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 09:49:54.580615   26437 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 09:49:54.580622   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:54.582427   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.582737   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:54.582765   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.582842   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:54.583017   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.583148   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.583289   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:54.583502   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:54.583859   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:54.583874   26437 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 09:49:54.705527   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:49:54.705554   26437 main.go:141] libmachine: Detecting the provisioner...
	I0115 09:49:54.705565   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:54.708216   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.708584   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:54.708615   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.708721   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:54.708897   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.709063   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.709208   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:54.709352   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:54.709674   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:54.709687   26437 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 09:49:54.830926   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 09:49:54.831004   26437 main.go:141] libmachine: found compatible host: buildroot
	I0115 09:49:54.831020   26437 main.go:141] libmachine: Provisioning with buildroot...
	I0115 09:49:54.831032   26437 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:49:54.831261   26437 buildroot.go:166] provisioning hostname "multinode-975382"
	I0115 09:49:54.831285   26437 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:49:54.831447   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:54.833891   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.834235   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:54.834267   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.834351   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:54.834534   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.834665   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.834801   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:54.834967   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:54.835394   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:54.835413   26437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382 && echo "multinode-975382" | sudo tee /etc/hostname
	I0115 09:49:54.966781   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-975382
	
	I0115 09:49:54.966809   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:54.969515   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.969932   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:54.969959   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:54.970179   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:54.970357   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.970579   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:54.970709   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:54.970912   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:54.971244   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:54.971266   26437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-975382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-975382/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-975382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:49:55.104027   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:49:55.104058   26437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 09:49:55.104098   26437 buildroot.go:174] setting up certificates
	I0115 09:49:55.104109   26437 provision.go:83] configureAuth start
	I0115 09:49:55.104121   26437 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:49:55.104426   26437 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 09:49:55.106952   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.107391   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.107421   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.107533   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.109570   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.109903   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.109936   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.110018   26437 provision.go:138] copyHostCerts
	I0115 09:49:55.110044   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:49:55.110072   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 09:49:55.110080   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:49:55.110126   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 09:49:55.110195   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:49:55.110223   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 09:49:55.110237   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:49:55.110255   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 09:49:55.110298   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:49:55.110313   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 09:49:55.110319   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:49:55.110336   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 09:49:55.110377   26437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.multinode-975382 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube multinode-975382]
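The "generating server cert" step above issues a server certificate whose SANs cover the VM IP, localhost, and the machine names listed in the log. A minimal sketch of producing a certificate with those SANs using Go's crypto/x509 (self-signed here for brevity, whereas the provisioner signs with the minikube CA; the key size and validity period are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the "generating server cert ... san=[...]" line above.
	ips := []net.IP{net.ParseIP("192.168.39.217"), net.ParseIP("127.0.0.1")}
	dns := []string{"localhost", "minikube", "multinode-975382"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-975382"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed for illustration; the real flow uses ca.pem/ca-key.pem as issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}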
	I0115 09:49:55.211844   26437 provision.go:172] copyRemoteCerts
	I0115 09:49:55.211896   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:49:55.211918   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.214408   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.214712   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.214733   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.214891   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.215064   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.215213   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.215317   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:49:55.303169   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:49:55.303241   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0115 09:49:55.325666   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:49:55.325732   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 09:49:55.347933   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:49:55.348006   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 09:49:55.370163   26437 provision.go:86] duration metric: configureAuth took 266.042914ms
	I0115 09:49:55.370189   26437 buildroot.go:189] setting minikube options for container-runtime
	I0115 09:49:55.370353   26437 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:49:55.370438   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.372970   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.373288   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.373318   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.373488   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.373671   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.373836   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.373988   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.374124   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:55.374445   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:55.374460   26437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:49:55.695263   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:49:55.695287   26437 main.go:141] libmachine: Checking connection to Docker...
	I0115 09:49:55.695295   26437 main.go:141] libmachine: (multinode-975382) Calling .GetURL
	I0115 09:49:55.696529   26437 main.go:141] libmachine: (multinode-975382) DBG | Using libvirt version 6000000
	I0115 09:49:55.699307   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.699656   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.699685   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.699838   26437 main.go:141] libmachine: Docker is up and running!
	I0115 09:49:55.699854   26437 main.go:141] libmachine: Reticulating splines...
	I0115 09:49:55.699859   26437 client.go:171] LocalClient.Create took 24.396240788s
	I0115 09:49:55.699878   26437 start.go:167] duration metric: libmachine.API.Create for "multinode-975382" took 24.396304357s
	I0115 09:49:55.699887   26437 start.go:300] post-start starting for "multinode-975382" (driver="kvm2")
	I0115 09:49:55.699896   26437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:49:55.699911   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:55.700204   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:49:55.700226   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.702614   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.702856   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.702882   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.703003   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.703178   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.703346   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.703491   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:49:55.793068   26437 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:49:55.797167   26437 command_runner.go:130] > NAME=Buildroot
	I0115 09:49:55.797203   26437 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0115 09:49:55.797211   26437 command_runner.go:130] > ID=buildroot
	I0115 09:49:55.797218   26437 command_runner.go:130] > VERSION_ID=2021.02.12
	I0115 09:49:55.797226   26437 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0115 09:49:55.797328   26437 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 09:49:55.797346   26437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 09:49:55.797424   26437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 09:49:55.797522   26437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 09:49:55.797535   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 09:49:55.797640   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:49:55.806618   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:49:55.827834   26437 start.go:303] post-start completed in 127.936856ms
	I0115 09:49:55.827880   26437 main.go:141] libmachine: (multinode-975382) Calling .GetConfigRaw
	I0115 09:49:55.828438   26437 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 09:49:55.830791   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.831130   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.831160   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.831401   26437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:49:55.831610   26437 start.go:128] duration metric: createHost completed in 24.545444362s
	I0115 09:49:55.831635   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.833648   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.833972   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.834003   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.834153   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.834329   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.834503   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.834646   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.834805   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:49:55.835252   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:49:55.835267   26437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 09:49:55.955009   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705312195.923503318
	
	I0115 09:49:55.955031   26437 fix.go:206] guest clock: 1705312195.923503318
	I0115 09:49:55.955039   26437 fix.go:219] Guest: 2024-01-15 09:49:55.923503318 +0000 UTC Remote: 2024-01-15 09:49:55.831621921 +0000 UTC m=+24.667697471 (delta=91.881397ms)
	I0115 09:49:55.955084   26437 fix.go:190] guest clock delta is within tolerance: 91.881397ms
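The clock check above parses the guest's "date +%s.%N" output and compares it against the host time recorded just before the command ran; only the absolute delta matters. The same comparison, with the two timestamps taken from the log and an assumed tolerance value (minikube's actual threshold is not shown here):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest timestamp as reported by `date +%s.%N` in the log above.
	guest := time.Unix(1705312195, 923503318)
	// Host timestamp recorded when the command was issued.
	host := time.Date(2024, 1, 15, 9, 49, 55, 831621921, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}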
	I0115 09:49:55.955092   26437 start.go:83] releasing machines lock for "multinode-975382", held for 24.669000628s
	I0115 09:49:55.955115   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:55.955402   26437 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 09:49:55.957890   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.958186   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.958207   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.958401   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:55.959003   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:55.959181   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:49:55.959265   26437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:49:55.959310   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.959442   26437 ssh_runner.go:195] Run: cat /version.json
	I0115 09:49:55.959474   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:49:55.961804   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.961990   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.962150   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.962175   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.962316   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.962392   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:55.962432   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:55.962489   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.962570   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:49:55.962643   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.962714   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:49:55.962790   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:49:55.962837   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:49:55.962948   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:49:56.071435   26437 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 09:49:56.071500   26437 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0115 09:49:56.071606   26437 ssh_runner.go:195] Run: systemctl --version
	I0115 09:49:56.077006   26437 command_runner.go:130] > systemd 247 (247)
	I0115 09:49:56.077033   26437 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0115 09:49:56.077090   26437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:49:56.228027   26437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:49:56.233967   26437 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0115 09:49:56.234457   26437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 09:49:56.234509   26437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:49:56.249121   26437 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0115 09:49:56.249155   26437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 09:49:56.249162   26437 start.go:475] detecting cgroup driver to use...
	I0115 09:49:56.249222   26437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:49:56.262939   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:49:56.274385   26437 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:49:56.274456   26437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:49:56.286705   26437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:49:56.298481   26437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:49:56.311737   26437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0115 09:49:56.403765   26437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:49:56.417333   26437 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 09:49:56.525640   26437 docker.go:233] disabling docker service ...
	I0115 09:49:56.525694   26437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:49:56.539032   26437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:49:56.550540   26437 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0115 09:49:56.550620   26437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:49:56.656782   26437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 09:49:56.656873   26437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:49:56.668587   26437 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0115 09:49:56.668871   26437 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 09:49:56.772807   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:49:56.785164   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:49:56.801485   26437 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 09:49:56.801527   26437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:49:56.801573   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:49:56.810106   26437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:49:56.810162   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:49:56.818568   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:49:56.827161   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:49:56.835721   26437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
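The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. Done locally rather than over SSH, the pause-image and cgroup-manager edits amount to a read-replace-write; a sketch (the regexes paraphrase the sed expressions and this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log

	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}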
	I0115 09:49:56.844882   26437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:49:56.852392   26437 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:49:56.852557   26437 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:49:56.852591   26437 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 09:49:56.863623   26437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:49:56.872369   26437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:49:56.988492   26437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:49:57.154222   26437 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:49:57.154279   26437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:49:57.162266   26437 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 09:49:57.162285   26437 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 09:49:57.162291   26437 command_runner.go:130] > Device: 16h/22d	Inode: 737         Links: 1
	I0115 09:49:57.162298   26437 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:49:57.162302   26437 command_runner.go:130] > Access: 2024-01-15 09:49:57.111965187 +0000
	I0115 09:49:57.162308   26437 command_runner.go:130] > Modify: 2024-01-15 09:49:57.111965187 +0000
	I0115 09:49:57.162314   26437 command_runner.go:130] > Change: 2024-01-15 09:49:57.111965187 +0000
	I0115 09:49:57.162326   26437 command_runner.go:130] >  Birth: -
	I0115 09:49:57.162876   26437 start.go:543] Will wait 60s for crictl version
	I0115 09:49:57.162914   26437 ssh_runner.go:195] Run: which crictl
	I0115 09:49:57.166310   26437 command_runner.go:130] > /usr/bin/crictl
	I0115 09:49:57.166473   26437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:49:57.212537   26437 command_runner.go:130] > Version:  0.1.0
	I0115 09:49:57.212565   26437 command_runner.go:130] > RuntimeName:  cri-o
	I0115 09:49:57.212573   26437 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0115 09:49:57.212587   26437 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 09:49:57.213927   26437 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 09:49:57.213994   26437 ssh_runner.go:195] Run: crio --version
	I0115 09:49:57.261454   26437 command_runner.go:130] > crio version 1.24.1
	I0115 09:49:57.261472   26437 command_runner.go:130] > Version:          1.24.1
	I0115 09:49:57.261479   26437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 09:49:57.261483   26437 command_runner.go:130] > GitTreeState:     dirty
	I0115 09:49:57.261489   26437 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 09:49:57.261493   26437 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 09:49:57.261497   26437 command_runner.go:130] > Compiler:         gc
	I0115 09:49:57.261502   26437 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:49:57.261507   26437 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:49:57.261513   26437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:49:57.261517   26437 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:49:57.261522   26437 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:49:57.262891   26437 ssh_runner.go:195] Run: crio --version
	I0115 09:49:57.317643   26437 command_runner.go:130] > crio version 1.24.1
	I0115 09:49:57.317661   26437 command_runner.go:130] > Version:          1.24.1
	I0115 09:49:57.317668   26437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 09:49:57.317672   26437 command_runner.go:130] > GitTreeState:     dirty
	I0115 09:49:57.317678   26437 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 09:49:57.317682   26437 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 09:49:57.317687   26437 command_runner.go:130] > Compiler:         gc
	I0115 09:49:57.317694   26437 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:49:57.317702   26437 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:49:57.317714   26437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:49:57.317724   26437 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:49:57.317730   26437 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:49:57.320800   26437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 09:49:57.322333   26437 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 09:49:57.324654   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:57.324986   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:49:57.325015   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:49:57.325193   26437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 09:49:57.329067   26437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:49:57.340418   26437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:49:57.340482   26437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:49:57.374650   26437 command_runner.go:130] > {
	I0115 09:49:57.374672   26437 command_runner.go:130] >   "images": [
	I0115 09:49:57.374678   26437 command_runner.go:130] >   ]
	I0115 09:49:57.374684   26437 command_runner.go:130] > }
	I0115 09:49:57.374809   26437 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 09:49:57.374857   26437 ssh_runner.go:195] Run: which lz4
	I0115 09:49:57.378447   26437 command_runner.go:130] > /usr/bin/lz4
	I0115 09:49:57.378484   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 09:49:57.378556   26437 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 09:49:57.382126   26437 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 09:49:57.382361   26437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 09:49:57.382405   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 09:49:59.105528   26437 crio.go:444] Took 1.726990 seconds to copy over tarball
	I0115 09:49:59.105596   26437 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 09:50:01.993288   26437 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.88766407s)
	I0115 09:50:01.993314   26437 crio.go:451] Took 2.887759 seconds to extract the tarball
	I0115 09:50:01.993326   26437 ssh_runner.go:146] rm: /preloaded.tar.lz4
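The preload sequence above stats /preloaded.tar.lz4 on the guest, copies the cached tarball over only when it is missing, extracts it with tar -I lz4, and then removes it. A simplified sketch of the check-then-copy part (paths taken from the log; scpToGuest is a hypothetical helper written against the local filesystem purely for illustration):

package main

import (
	"fmt"
	"io"
	"os"
)

// scpToGuest is a hypothetical stand-in for copying a file to the VM over SSH;
// here it just copies between local paths so the sketch stays self-contained.
func scpToGuest(local, remote string) error {
	src, err := os.Open(local)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(remote) // the real flow streams this over scp/sftp
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	local := "/home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
	remote := "/preloaded.tar.lz4"

	// Mirrors the existence check: stat -c "%s %y" /preloaded.tar.lz4.
	if _, err := os.Stat(remote); err == nil {
		fmt.Println("preload already present, skipping copy")
		return
	}
	if err := scpToGuest(local, remote); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
		os.Exit(1)
	}
	fmt.Println("preload copied; ready to extract with tar -I lz4")
}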
	I0115 09:50:02.033982   26437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:50:02.112727   26437 command_runner.go:130] > {
	I0115 09:50:02.112749   26437 command_runner.go:130] >   "images": [
	I0115 09:50:02.112753   26437 command_runner.go:130] >     {
	I0115 09:50:02.112761   26437 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0115 09:50:02.112765   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.112772   26437 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 09:50:02.112776   26437 command_runner.go:130] >       ],
	I0115 09:50:02.112780   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.112799   26437 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 09:50:02.112811   26437 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0115 09:50:02.112817   26437 command_runner.go:130] >       ],
	I0115 09:50:02.112825   26437 command_runner.go:130] >       "size": "65258016",
	I0115 09:50:02.112833   26437 command_runner.go:130] >       "uid": null,
	I0115 09:50:02.112841   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.112852   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.112857   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.112864   26437 command_runner.go:130] >     },
	I0115 09:50:02.112867   26437 command_runner.go:130] >     {
	I0115 09:50:02.112873   26437 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0115 09:50:02.112878   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.112883   26437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 09:50:02.112888   26437 command_runner.go:130] >       ],
	I0115 09:50:02.112895   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.112905   26437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0115 09:50:02.112912   26437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0115 09:50:02.112919   26437 command_runner.go:130] >       ],
	I0115 09:50:02.112925   26437 command_runner.go:130] >       "size": "31470524",
	I0115 09:50:02.112930   26437 command_runner.go:130] >       "uid": null,
	I0115 09:50:02.112934   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.112941   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.112945   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.112959   26437 command_runner.go:130] >     },
	I0115 09:50:02.112963   26437 command_runner.go:130] >     {
	I0115 09:50:02.112969   26437 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0115 09:50:02.112976   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.112981   26437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 09:50:02.112987   26437 command_runner.go:130] >       ],
	I0115 09:50:02.112992   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113001   26437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0115 09:50:02.113009   26437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0115 09:50:02.113018   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113022   26437 command_runner.go:130] >       "size": "53621675",
	I0115 09:50:02.113027   26437 command_runner.go:130] >       "uid": null,
	I0115 09:50:02.113031   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113038   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113042   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113046   26437 command_runner.go:130] >     },
	I0115 09:50:02.113049   26437 command_runner.go:130] >     {
	I0115 09:50:02.113055   26437 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0115 09:50:02.113062   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113067   26437 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 09:50:02.113073   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113077   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113084   26437 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0115 09:50:02.113093   26437 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0115 09:50:02.113101   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113108   26437 command_runner.go:130] >       "size": "295456551",
	I0115 09:50:02.113112   26437 command_runner.go:130] >       "uid": {
	I0115 09:50:02.113119   26437 command_runner.go:130] >         "value": "0"
	I0115 09:50:02.113123   26437 command_runner.go:130] >       },
	I0115 09:50:02.113129   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113133   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113140   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113143   26437 command_runner.go:130] >     },
	I0115 09:50:02.113147   26437 command_runner.go:130] >     {
	I0115 09:50:02.113153   26437 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0115 09:50:02.113160   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113165   26437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 09:50:02.113171   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113175   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113182   26437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0115 09:50:02.113192   26437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0115 09:50:02.113197   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113204   26437 command_runner.go:130] >       "size": "127226832",
	I0115 09:50:02.113208   26437 command_runner.go:130] >       "uid": {
	I0115 09:50:02.113215   26437 command_runner.go:130] >         "value": "0"
	I0115 09:50:02.113219   26437 command_runner.go:130] >       },
	I0115 09:50:02.113223   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113229   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113233   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113237   26437 command_runner.go:130] >     },
	I0115 09:50:02.113241   26437 command_runner.go:130] >     {
	I0115 09:50:02.113249   26437 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0115 09:50:02.113253   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113262   26437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 09:50:02.113266   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113270   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113277   26437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 09:50:02.113287   26437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0115 09:50:02.113293   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113298   26437 command_runner.go:130] >       "size": "123261750",
	I0115 09:50:02.113302   26437 command_runner.go:130] >       "uid": {
	I0115 09:50:02.113308   26437 command_runner.go:130] >         "value": "0"
	I0115 09:50:02.113312   26437 command_runner.go:130] >       },
	I0115 09:50:02.113319   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113323   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113329   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113333   26437 command_runner.go:130] >     },
	I0115 09:50:02.113339   26437 command_runner.go:130] >     {
	I0115 09:50:02.113344   26437 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0115 09:50:02.113349   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113354   26437 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 09:50:02.113357   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113363   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113371   26437 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0115 09:50:02.113379   26437 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 09:50:02.113383   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113388   26437 command_runner.go:130] >       "size": "74749335",
	I0115 09:50:02.113394   26437 command_runner.go:130] >       "uid": null,
	I0115 09:50:02.113399   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113406   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113410   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113416   26437 command_runner.go:130] >     },
	I0115 09:50:02.113420   26437 command_runner.go:130] >     {
	I0115 09:50:02.113429   26437 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0115 09:50:02.113433   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113442   26437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 09:50:02.113446   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113450   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113463   26437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 09:50:02.113472   26437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0115 09:50:02.113476   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113480   26437 command_runner.go:130] >       "size": "61551410",
	I0115 09:50:02.113487   26437 command_runner.go:130] >       "uid": {
	I0115 09:50:02.113491   26437 command_runner.go:130] >         "value": "0"
	I0115 09:50:02.113494   26437 command_runner.go:130] >       },
	I0115 09:50:02.113500   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113504   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113511   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113515   26437 command_runner.go:130] >     },
	I0115 09:50:02.113519   26437 command_runner.go:130] >     {
	I0115 09:50:02.113527   26437 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0115 09:50:02.113531   26437 command_runner.go:130] >       "repoTags": [
	I0115 09:50:02.113538   26437 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 09:50:02.113542   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113546   26437 command_runner.go:130] >       "repoDigests": [
	I0115 09:50:02.113553   26437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0115 09:50:02.113562   26437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0115 09:50:02.113566   26437 command_runner.go:130] >       ],
	I0115 09:50:02.113570   26437 command_runner.go:130] >       "size": "750414",
	I0115 09:50:02.113575   26437 command_runner.go:130] >       "uid": {
	I0115 09:50:02.113580   26437 command_runner.go:130] >         "value": "65535"
	I0115 09:50:02.113584   26437 command_runner.go:130] >       },
	I0115 09:50:02.113590   26437 command_runner.go:130] >       "username": "",
	I0115 09:50:02.113594   26437 command_runner.go:130] >       "spec": null,
	I0115 09:50:02.113600   26437 command_runner.go:130] >       "pinned": false
	I0115 09:50:02.113604   26437 command_runner.go:130] >     }
	I0115 09:50:02.113607   26437 command_runner.go:130] >   ]
	I0115 09:50:02.113610   26437 command_runner.go:130] > }
	I0115 09:50:02.113707   26437 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:50:02.113716   26437 cache_images.go:84] Images are preloaded, skipping loading
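
	The JSON listing above is what minikube reads back from the CRI-O image service before deciding that the preload already contains everything it needs. A minimal sketch of that kind of comparison, assuming the same JSON shape (an "images" array whose entries carry "repoTags", e.g. the output of `crictl images -o json`) and a hypothetical required-tag list rather than the exact set minikube checks:

	    // preloadcheck.go: sketch of verifying a preload against an image list in
	    // the shape logged above. The input file name and the required-tag list
	    // are illustrative assumptions, not what minikube actually uses.
	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os"
	    )

	    type imageList struct {
	        Images []struct {
	            RepoTags []string `json:"repoTags"`
	        } `json:"images"`
	    }

	    func main() {
	        data, err := os.ReadFile("images.json") // e.g. saved `crictl images -o json`
	        if err != nil {
	            panic(err)
	        }
	        var list imageList
	        if err := json.Unmarshal(data, &list); err != nil {
	            panic(err)
	        }
	        // Index every tag that is already present in the runtime.
	        have := map[string]bool{}
	        for _, img := range list.Images {
	            for _, tag := range img.RepoTags {
	                have[tag] = true
	            }
	        }
	        // Hypothetical subset of images the cluster needs.
	        required := []string{
	            "registry.k8s.io/kube-apiserver:v1.28.4",
	            "registry.k8s.io/etcd:3.5.9-0",
	            "gcr.io/k8s-minikube/storage-provisioner:v5",
	        }
	        for _, tag := range required {
	            if !have[tag] {
	                fmt.Println("missing:", tag)
	                return
	            }
	        }
	        fmt.Println("all images are preloaded")
	    }
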
	I0115 09:50:02.113771   26437 ssh_runner.go:195] Run: crio config
	I0115 09:50:02.163734   26437 command_runner.go:130] ! time="2024-01-15 09:50:02.141723982Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0115 09:50:02.163775   26437 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 09:50:02.172701   26437 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 09:50:02.172727   26437 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 09:50:02.172737   26437 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 09:50:02.172743   26437 command_runner.go:130] > #
	I0115 09:50:02.172755   26437 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 09:50:02.172766   26437 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 09:50:02.172783   26437 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 09:50:02.172793   26437 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 09:50:02.172803   26437 command_runner.go:130] > # reload'.
	I0115 09:50:02.172813   26437 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 09:50:02.172826   26437 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 09:50:02.172840   26437 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 09:50:02.172852   26437 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 09:50:02.172860   26437 command_runner.go:130] > [crio]
	I0115 09:50:02.172871   26437 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 09:50:02.172879   26437 command_runner.go:130] > # container images, in this directory.
	I0115 09:50:02.172887   26437 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0115 09:50:02.172900   26437 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 09:50:02.172911   26437 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0115 09:50:02.172923   26437 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 09:50:02.172935   26437 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 09:50:02.172945   26437 command_runner.go:130] > storage_driver = "overlay"
	I0115 09:50:02.172957   26437 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 09:50:02.172970   26437 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 09:50:02.172980   26437 command_runner.go:130] > storage_option = [
	I0115 09:50:02.172988   26437 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0115 09:50:02.172997   26437 command_runner.go:130] > ]
	I0115 09:50:02.173007   26437 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 09:50:02.173016   26437 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 09:50:02.173020   26437 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 09:50:02.173026   26437 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 09:50:02.173032   26437 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 09:50:02.173036   26437 command_runner.go:130] > # always happen on a node reboot
	I0115 09:50:02.173041   26437 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 09:50:02.173046   26437 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 09:50:02.173054   26437 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 09:50:02.173062   26437 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 09:50:02.173069   26437 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 09:50:02.173077   26437 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 09:50:02.173088   26437 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 09:50:02.173094   26437 command_runner.go:130] > # internal_wipe = true
	I0115 09:50:02.173100   26437 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 09:50:02.173108   26437 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 09:50:02.173114   26437 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 09:50:02.173122   26437 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 09:50:02.173128   26437 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 09:50:02.173134   26437 command_runner.go:130] > [crio.api]
	I0115 09:50:02.173140   26437 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 09:50:02.173147   26437 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 09:50:02.173153   26437 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 09:50:02.173160   26437 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 09:50:02.173166   26437 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 09:50:02.173171   26437 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 09:50:02.173178   26437 command_runner.go:130] > # stream_port = "0"
	I0115 09:50:02.173183   26437 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 09:50:02.173187   26437 command_runner.go:130] > # stream_enable_tls = false
	I0115 09:50:02.173194   26437 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 09:50:02.173201   26437 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 09:50:02.173207   26437 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 09:50:02.173215   26437 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 09:50:02.173219   26437 command_runner.go:130] > # minutes.
	I0115 09:50:02.173224   26437 command_runner.go:130] > # stream_tls_cert = ""
	I0115 09:50:02.173230   26437 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 09:50:02.173238   26437 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 09:50:02.173242   26437 command_runner.go:130] > # stream_tls_key = ""
	I0115 09:50:02.173251   26437 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 09:50:02.173258   26437 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 09:50:02.173263   26437 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 09:50:02.173269   26437 command_runner.go:130] > # stream_tls_ca = ""
	I0115 09:50:02.173277   26437 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:50:02.173283   26437 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0115 09:50:02.173290   26437 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:50:02.173297   26437 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0115 09:50:02.173310   26437 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 09:50:02.173318   26437 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 09:50:02.173322   26437 command_runner.go:130] > [crio.runtime]
	I0115 09:50:02.173330   26437 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 09:50:02.173337   26437 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 09:50:02.173348   26437 command_runner.go:130] > # "nofile=1024:2048"
	I0115 09:50:02.173360   26437 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 09:50:02.173370   26437 command_runner.go:130] > # default_ulimits = [
	I0115 09:50:02.173379   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173388   26437 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 09:50:02.173402   26437 command_runner.go:130] > # no_pivot = false
	I0115 09:50:02.173414   26437 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 09:50:02.173424   26437 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 09:50:02.173435   26437 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 09:50:02.173444   26437 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 09:50:02.173454   26437 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 09:50:02.173467   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:50:02.173483   26437 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0115 09:50:02.173490   26437 command_runner.go:130] > # Cgroup setting for conmon
	I0115 09:50:02.173497   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 09:50:02.173503   26437 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 09:50:02.173510   26437 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 09:50:02.173517   26437 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 09:50:02.173524   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:50:02.173530   26437 command_runner.go:130] > conmon_env = [
	I0115 09:50:02.173538   26437 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0115 09:50:02.173543   26437 command_runner.go:130] > ]
	I0115 09:50:02.173548   26437 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 09:50:02.173562   26437 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 09:50:02.173570   26437 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 09:50:02.173577   26437 command_runner.go:130] > # default_env = [
	I0115 09:50:02.173580   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173586   26437 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 09:50:02.173593   26437 command_runner.go:130] > # selinux = false
	I0115 09:50:02.173598   26437 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 09:50:02.173607   26437 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 09:50:02.173613   26437 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 09:50:02.173619   26437 command_runner.go:130] > # seccomp_profile = ""
	I0115 09:50:02.173625   26437 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 09:50:02.173630   26437 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 09:50:02.173637   26437 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 09:50:02.173644   26437 command_runner.go:130] > # which might increase security.
	I0115 09:50:02.173648   26437 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0115 09:50:02.173656   26437 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 09:50:02.173664   26437 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 09:50:02.173671   26437 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 09:50:02.173679   26437 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 09:50:02.173686   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:50:02.173691   26437 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 09:50:02.173697   26437 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 09:50:02.173701   26437 command_runner.go:130] > # the cgroup blockio controller.
	I0115 09:50:02.173708   26437 command_runner.go:130] > # blockio_config_file = ""
	I0115 09:50:02.173714   26437 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 09:50:02.173718   26437 command_runner.go:130] > # irqbalance daemon.
	I0115 09:50:02.173725   26437 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 09:50:02.173731   26437 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 09:50:02.173739   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:50:02.173743   26437 command_runner.go:130] > # rdt_config_file = ""
	I0115 09:50:02.173751   26437 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 09:50:02.173755   26437 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 09:50:02.173763   26437 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 09:50:02.173768   26437 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 09:50:02.173776   26437 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 09:50:02.173782   26437 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 09:50:02.173790   26437 command_runner.go:130] > # will be added.
	I0115 09:50:02.173798   26437 command_runner.go:130] > # default_capabilities = [
	I0115 09:50:02.173802   26437 command_runner.go:130] > # 	"CHOWN",
	I0115 09:50:02.173806   26437 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 09:50:02.173812   26437 command_runner.go:130] > # 	"FSETID",
	I0115 09:50:02.173816   26437 command_runner.go:130] > # 	"FOWNER",
	I0115 09:50:02.173820   26437 command_runner.go:130] > # 	"SETGID",
	I0115 09:50:02.173824   26437 command_runner.go:130] > # 	"SETUID",
	I0115 09:50:02.173829   26437 command_runner.go:130] > # 	"SETPCAP",
	I0115 09:50:02.173833   26437 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 09:50:02.173839   26437 command_runner.go:130] > # 	"KILL",
	I0115 09:50:02.173842   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173849   26437 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 09:50:02.173857   26437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:50:02.173861   26437 command_runner.go:130] > # default_sysctls = [
	I0115 09:50:02.173867   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173871   26437 command_runner.go:130] > # List of devices on the host that a
	I0115 09:50:02.173877   26437 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 09:50:02.173884   26437 command_runner.go:130] > # allowed_devices = [
	I0115 09:50:02.173891   26437 command_runner.go:130] > # 	"/dev/fuse",
	I0115 09:50:02.173894   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173899   26437 command_runner.go:130] > # List of additional devices, specified as
	I0115 09:50:02.173906   26437 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 09:50:02.173910   26437 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 09:50:02.173934   26437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:50:02.173938   26437 command_runner.go:130] > # additional_devices = [
	I0115 09:50:02.173942   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173946   26437 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 09:50:02.173950   26437 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 09:50:02.173954   26437 command_runner.go:130] > # 	"/etc/cdi",
	I0115 09:50:02.173958   26437 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 09:50:02.173961   26437 command_runner.go:130] > # ]
	I0115 09:50:02.173967   26437 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 09:50:02.173973   26437 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 09:50:02.173977   26437 command_runner.go:130] > # Defaults to false.
	I0115 09:50:02.173982   26437 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 09:50:02.173990   26437 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 09:50:02.173998   26437 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 09:50:02.174002   26437 command_runner.go:130] > # hooks_dir = [
	I0115 09:50:02.174007   26437 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 09:50:02.174010   26437 command_runner.go:130] > # ]
	I0115 09:50:02.174016   26437 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 09:50:02.174024   26437 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 09:50:02.174029   26437 command_runner.go:130] > # its default mounts from the following two files:
	I0115 09:50:02.174034   26437 command_runner.go:130] > #
	I0115 09:50:02.174040   26437 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 09:50:02.174049   26437 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 09:50:02.174055   26437 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 09:50:02.174060   26437 command_runner.go:130] > #
	I0115 09:50:02.174066   26437 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 09:50:02.174072   26437 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 09:50:02.174080   26437 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 09:50:02.174085   26437 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 09:50:02.174091   26437 command_runner.go:130] > #
	I0115 09:50:02.174096   26437 command_runner.go:130] > # default_mounts_file = ""
	I0115 09:50:02.174105   26437 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 09:50:02.174111   26437 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 09:50:02.174118   26437 command_runner.go:130] > pids_limit = 1024
	I0115 09:50:02.174124   26437 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 09:50:02.174132   26437 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 09:50:02.174138   26437 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 09:50:02.174148   26437 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 09:50:02.174153   26437 command_runner.go:130] > # log_size_max = -1
	I0115 09:50:02.174161   26437 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 09:50:02.174166   26437 command_runner.go:130] > # log_to_journald = false
	I0115 09:50:02.174174   26437 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 09:50:02.174179   26437 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 09:50:02.174186   26437 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 09:50:02.174191   26437 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 09:50:02.174199   26437 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 09:50:02.174203   26437 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 09:50:02.174211   26437 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 09:50:02.174215   26437 command_runner.go:130] > # read_only = false
	I0115 09:50:02.174223   26437 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 09:50:02.174229   26437 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 09:50:02.174235   26437 command_runner.go:130] > # live configuration reload.
	I0115 09:50:02.174240   26437 command_runner.go:130] > # log_level = "info"
	I0115 09:50:02.174247   26437 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 09:50:02.174253   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:50:02.174257   26437 command_runner.go:130] > # log_filter = ""
	I0115 09:50:02.174264   26437 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 09:50:02.174272   26437 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 09:50:02.174276   26437 command_runner.go:130] > # separated by comma.
	I0115 09:50:02.174283   26437 command_runner.go:130] > # uid_mappings = ""
	I0115 09:50:02.174288   26437 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 09:50:02.174297   26437 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 09:50:02.174301   26437 command_runner.go:130] > # separated by comma.
	I0115 09:50:02.174305   26437 command_runner.go:130] > # gid_mappings = ""
	I0115 09:50:02.174311   26437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 09:50:02.174337   26437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:50:02.174352   26437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:50:02.174359   26437 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 09:50:02.174369   26437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 09:50:02.174384   26437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:50:02.174397   26437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:50:02.174405   26437 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 09:50:02.174429   26437 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 09:50:02.174442   26437 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 09:50:02.174452   26437 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 09:50:02.174459   26437 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 09:50:02.174465   26437 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 09:50:02.174474   26437 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 09:50:02.174485   26437 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 09:50:02.174490   26437 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 09:50:02.174496   26437 command_runner.go:130] > drop_infra_ctr = false
	I0115 09:50:02.174502   26437 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 09:50:02.174510   26437 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 09:50:02.174517   26437 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 09:50:02.174524   26437 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 09:50:02.174531   26437 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 09:50:02.174538   26437 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 09:50:02.174543   26437 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 09:50:02.174552   26437 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 09:50:02.174556   26437 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0115 09:50:02.174565   26437 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 09:50:02.174571   26437 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 09:50:02.174579   26437 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 09:50:02.174584   26437 command_runner.go:130] > # default_runtime = "runc"
	I0115 09:50:02.174594   26437 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 09:50:02.174601   26437 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 09:50:02.174615   26437 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 09:50:02.174620   26437 command_runner.go:130] > # creation as a file is not desired either.
	I0115 09:50:02.174627   26437 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 09:50:02.174632   26437 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 09:50:02.174636   26437 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 09:50:02.174639   26437 command_runner.go:130] > # ]
	I0115 09:50:02.174649   26437 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 09:50:02.174655   26437 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 09:50:02.174661   26437 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 09:50:02.174669   26437 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 09:50:02.174672   26437 command_runner.go:130] > #
	I0115 09:50:02.174677   26437 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 09:50:02.174681   26437 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 09:50:02.174685   26437 command_runner.go:130] > #  runtime_type = "oci"
	I0115 09:50:02.174697   26437 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 09:50:02.174702   26437 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 09:50:02.174706   26437 command_runner.go:130] > #  allowed_annotations = []
	I0115 09:50:02.174710   26437 command_runner.go:130] > # Where:
	I0115 09:50:02.174715   26437 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 09:50:02.174722   26437 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 09:50:02.174731   26437 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 09:50:02.174737   26437 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 09:50:02.174743   26437 command_runner.go:130] > #   in $PATH.
	I0115 09:50:02.174749   26437 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 09:50:02.174760   26437 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 09:50:02.174769   26437 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 09:50:02.174773   26437 command_runner.go:130] > #   state.
	I0115 09:50:02.174781   26437 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 09:50:02.174787   26437 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0115 09:50:02.174795   26437 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 09:50:02.174801   26437 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 09:50:02.174813   26437 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 09:50:02.174819   26437 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 09:50:02.174829   26437 command_runner.go:130] > #   The currently recognized values are:
	I0115 09:50:02.174835   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 09:50:02.174844   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 09:50:02.174850   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 09:50:02.174856   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 09:50:02.174866   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 09:50:02.174873   26437 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 09:50:02.174881   26437 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 09:50:02.174888   26437 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 09:50:02.174896   26437 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 09:50:02.174900   26437 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 09:50:02.174907   26437 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0115 09:50:02.174911   26437 command_runner.go:130] > runtime_type = "oci"
	I0115 09:50:02.174918   26437 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 09:50:02.174922   26437 command_runner.go:130] > runtime_config_path = ""
	I0115 09:50:02.174927   26437 command_runner.go:130] > monitor_path = ""
	I0115 09:50:02.174931   26437 command_runner.go:130] > monitor_cgroup = ""
	I0115 09:50:02.174935   26437 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 09:50:02.174943   26437 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 09:50:02.174947   26437 command_runner.go:130] > # running containers
	I0115 09:50:02.174951   26437 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 09:50:02.174959   26437 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 09:50:02.174990   26437 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 09:50:02.174999   26437 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 09:50:02.175004   26437 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 09:50:02.175009   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 09:50:02.175013   26437 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 09:50:02.175020   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 09:50:02.175027   26437 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 09:50:02.175032   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 09:50:02.175041   26437 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 09:50:02.175047   26437 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 09:50:02.175053   26437 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 09:50:02.175060   26437 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 09:50:02.175070   26437 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 09:50:02.175076   26437 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 09:50:02.175087   26437 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 09:50:02.175095   26437 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 09:50:02.175103   26437 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 09:50:02.175109   26437 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 09:50:02.175116   26437 command_runner.go:130] > # Example:
	I0115 09:50:02.175121   26437 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 09:50:02.175126   26437 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 09:50:02.175133   26437 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 09:50:02.175138   26437 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 09:50:02.175145   26437 command_runner.go:130] > # cpuset = 0
	I0115 09:50:02.175149   26437 command_runner.go:130] > # cpushares = "0-1"
	I0115 09:50:02.175155   26437 command_runner.go:130] > # Where:
	I0115 09:50:02.175160   26437 command_runner.go:130] > # The workload name is workload-type.
	I0115 09:50:02.175169   26437 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 09:50:02.175175   26437 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 09:50:02.175180   26437 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 09:50:02.175190   26437 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 09:50:02.175196   26437 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 09:50:02.175202   26437 command_runner.go:130] > # 
	I0115 09:50:02.175208   26437 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 09:50:02.175214   26437 command_runner.go:130] > #
	I0115 09:50:02.175219   26437 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 09:50:02.175225   26437 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 09:50:02.175234   26437 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 09:50:02.175240   26437 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 09:50:02.175248   26437 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 09:50:02.175252   26437 command_runner.go:130] > [crio.image]
	I0115 09:50:02.175260   26437 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 09:50:02.175274   26437 command_runner.go:130] > # default_transport = "docker://"
	I0115 09:50:02.175283   26437 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 09:50:02.175289   26437 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:50:02.175295   26437 command_runner.go:130] > # global_auth_file = ""
	I0115 09:50:02.175300   26437 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 09:50:02.175305   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:50:02.175309   26437 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 09:50:02.175316   26437 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 09:50:02.175321   26437 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:50:02.175326   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:50:02.175330   26437 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 09:50:02.175336   26437 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 09:50:02.175345   26437 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 09:50:02.175355   26437 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 09:50:02.175364   26437 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 09:50:02.175371   26437 command_runner.go:130] > # pause_command = "/pause"
	I0115 09:50:02.175380   26437 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 09:50:02.175391   26437 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 09:50:02.175401   26437 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 09:50:02.175410   26437 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 09:50:02.175419   26437 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 09:50:02.175426   26437 command_runner.go:130] > # signature_policy = ""
	I0115 09:50:02.175436   26437 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 09:50:02.175445   26437 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 09:50:02.175451   26437 command_runner.go:130] > # changing them here.
	I0115 09:50:02.175457   26437 command_runner.go:130] > # insecure_registries = [
	I0115 09:50:02.175463   26437 command_runner.go:130] > # ]
	I0115 09:50:02.175471   26437 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 09:50:02.175484   26437 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 09:50:02.175489   26437 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 09:50:02.175494   26437 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 09:50:02.175498   26437 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 09:50:02.175504   26437 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 09:50:02.175508   26437 command_runner.go:130] > # CNI plugins.
	I0115 09:50:02.175511   26437 command_runner.go:130] > [crio.network]
	I0115 09:50:02.175517   26437 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 09:50:02.175522   26437 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0115 09:50:02.175526   26437 command_runner.go:130] > # cni_default_network = ""
	I0115 09:50:02.175531   26437 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 09:50:02.175536   26437 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 09:50:02.175542   26437 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 09:50:02.175545   26437 command_runner.go:130] > # plugin_dirs = [
	I0115 09:50:02.175549   26437 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 09:50:02.175555   26437 command_runner.go:130] > # ]
	I0115 09:50:02.175560   26437 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 09:50:02.175564   26437 command_runner.go:130] > [crio.metrics]
	I0115 09:50:02.175569   26437 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 09:50:02.175572   26437 command_runner.go:130] > enable_metrics = true
	I0115 09:50:02.175577   26437 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 09:50:02.175581   26437 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 09:50:02.175587   26437 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0115 09:50:02.175597   26437 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 09:50:02.175606   26437 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 09:50:02.175611   26437 command_runner.go:130] > # metrics_collectors = [
	I0115 09:50:02.175616   26437 command_runner.go:130] > # 	"operations",
	I0115 09:50:02.175621   26437 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 09:50:02.175630   26437 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 09:50:02.175634   26437 command_runner.go:130] > # 	"operations_errors",
	I0115 09:50:02.175639   26437 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 09:50:02.175645   26437 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 09:50:02.175650   26437 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 09:50:02.175657   26437 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 09:50:02.175661   26437 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 09:50:02.175665   26437 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 09:50:02.175670   26437 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 09:50:02.175674   26437 command_runner.go:130] > # 	"containers_oom_total",
	I0115 09:50:02.175678   26437 command_runner.go:130] > # 	"containers_oom",
	I0115 09:50:02.175682   26437 command_runner.go:130] > # 	"processes_defunct",
	I0115 09:50:02.175687   26437 command_runner.go:130] > # 	"operations_total",
	I0115 09:50:02.175694   26437 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 09:50:02.175699   26437 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 09:50:02.175706   26437 command_runner.go:130] > # 	"operations_errors_total",
	I0115 09:50:02.175710   26437 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 09:50:02.175714   26437 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 09:50:02.175719   26437 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 09:50:02.175726   26437 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 09:50:02.175730   26437 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 09:50:02.175735   26437 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 09:50:02.175739   26437 command_runner.go:130] > # ]
	I0115 09:50:02.175744   26437 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 09:50:02.175749   26437 command_runner.go:130] > # metrics_port = 9090
	I0115 09:50:02.175754   26437 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 09:50:02.175760   26437 command_runner.go:130] > # metrics_socket = ""
	I0115 09:50:02.175765   26437 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 09:50:02.175774   26437 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 09:50:02.175780   26437 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 09:50:02.175786   26437 command_runner.go:130] > # certificate on any modification event.
	I0115 09:50:02.175790   26437 command_runner.go:130] > # metrics_cert = ""
	I0115 09:50:02.175797   26437 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 09:50:02.175802   26437 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 09:50:02.175809   26437 command_runner.go:130] > # metrics_key = ""
	I0115 09:50:02.175815   26437 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 09:50:02.175821   26437 command_runner.go:130] > [crio.tracing]
	I0115 09:50:02.175826   26437 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 09:50:02.175830   26437 command_runner.go:130] > # enable_tracing = false
	I0115 09:50:02.175836   26437 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 09:50:02.175843   26437 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 09:50:02.175848   26437 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 09:50:02.175855   26437 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 09:50:02.175860   26437 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 09:50:02.175866   26437 command_runner.go:130] > [crio.stats]
	I0115 09:50:02.175872   26437 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 09:50:02.175878   26437 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 09:50:02.175882   26437 command_runner.go:130] > # stats_collection_period = 0
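	The [crio.metrics] block dumped above has enable_metrics = true with the port left at its commented default. A minimal sketch of checking the endpoint from inside the node, assuming the default metrics_port of 9090 is kept and that exported metric names carry the crio_/container_runtime_crio_ prefixes described in the comments above:
	# Scrape CRI-O's Prometheus endpoint and count CRI-O metrics (port 9090 assumed from the default above)
	curl -s http://127.0.0.1:9090/metrics | grep -c 'crio_'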
	I0115 09:50:02.175952   26437 cni.go:84] Creating CNI manager for ""
	I0115 09:50:02.175961   26437 cni.go:136] 1 nodes found, recommending kindnet
	I0115 09:50:02.175976   26437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:50:02.175992   26437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-975382 NodeName:multinode-975382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:50:02.176118   26437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-975382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
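	A config like the one rendered above can be exercised without modifying the node; a sketch, assuming the file is written to /var/tmp/minikube/kubeadm.yaml as the log shows a few lines below:
	# Show what kubeadm would do with this config without changing anything on the host
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run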
	I0115 09:50:02.176207   26437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:50:02.176254   26437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:50:02.186014   26437 command_runner.go:130] > kubeadm
	I0115 09:50:02.186038   26437 command_runner.go:130] > kubectl
	I0115 09:50:02.186044   26437 command_runner.go:130] > kubelet
	I0115 09:50:02.186067   26437 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:50:02.186123   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:50:02.195412   26437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0115 09:50:02.210853   26437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:50:02.226698   26437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0115 09:50:02.242343   26437 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0115 09:50:02.246022   26437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
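	The one-liner above pins control-plane.minikube.internal to the node IP idempotently; spelled out, it is equivalent to:
	# Drop any stale entry for the control-plane alias, then append the current IP (values taken from the log)
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	echo "192.168.39.217	control-plane.minikube.internal" >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts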
	I0115 09:50:02.257151   26437 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382 for IP: 192.168.39.217
	I0115 09:50:02.257180   26437 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.257347   26437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 09:50:02.257407   26437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 09:50:02.257453   26437 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key
	I0115 09:50:02.257465   26437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt with IP's: []
	I0115 09:50:02.362427   26437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt ...
	I0115 09:50:02.362458   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt: {Name:mkbbac2c2efe60e7bc8765e123bceec6f9a6b4fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.362644   26437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key ...
	I0115 09:50:02.362657   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key: {Name:mk827a66c27cc17bf0c8b058baa0f42e7c0f3c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.362772   26437 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key.891f873f
	I0115 09:50:02.362791   26437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:50:02.666041   26437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt.891f873f ...
	I0115 09:50:02.666067   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt.891f873f: {Name:mk0e39c8f0dc5959d267d10c2eaa5fd072c498de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.666231   26437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key.891f873f ...
	I0115 09:50:02.666246   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key.891f873f: {Name:mkd23571fffba9ead00866753fcc4f3b09da0388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.666336   26437 certs.go:337] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt
	I0115 09:50:02.666440   26437 certs.go:341] copying /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key.891f873f -> /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key
	I0115 09:50:02.666518   26437 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key
	I0115 09:50:02.666535   26437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt with IP's: []
	I0115 09:50:02.923353   26437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt ...
	I0115 09:50:02.923381   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt: {Name:mk73786a5ad343d18caeb478c13c5e0d8a8989bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.923549   26437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key ...
	I0115 09:50:02.923567   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key: {Name:mk600c254a54b5357ff5afcd0d17286d521632dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:02.923656   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 09:50:02.923676   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 09:50:02.923686   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 09:50:02.923698   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 09:50:02.923710   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:50:02.923722   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:50:02.923738   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:50:02.923751   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:50:02.923803   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 09:50:02.923842   26437 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 09:50:02.923856   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 09:50:02.923884   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 09:50:02.923909   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:50:02.923932   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 09:50:02.923968   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:50:02.923995   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 09:50:02.924008   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 09:50:02.924020   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:50:02.924628   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:50:02.947032   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 09:50:02.970389   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:50:02.991947   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 09:50:03.013272   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:50:03.034252   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:50:03.055800   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:50:03.077071   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:50:03.097985   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 09:50:03.119785   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 09:50:03.141666   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:50:03.164838   26437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:50:03.182317   26437 ssh_runner.go:195] Run: openssl version
	I0115 09:50:03.188187   26437 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0115 09:50:03.188261   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 09:50:03.199215   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 09:50:03.203643   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 09:50:03.203767   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 09:50:03.203813   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 09:50:03.209178   26437 command_runner.go:130] > 3ec20f2e
	I0115 09:50:03.209244   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 09:50:03.219607   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:50:03.230038   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:50:03.234455   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:50:03.234793   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:50:03.234837   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:50:03.239931   26437 command_runner.go:130] > b5213941
	I0115 09:50:03.240220   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:50:03.250717   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 09:50:03.262149   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 09:50:03.266620   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 09:50:03.266833   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 09:50:03.266888   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 09:50:03.272127   26437 command_runner.go:130] > 51391683
	I0115 09:50:03.272203   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
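	The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable through a symlink named after its subject hash. The same pattern for the minikube CA shown in the log, as a sketch:
	# Compute the subject hash and expose the CA under /etc/ssl/certs/<hash>.0
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"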
	I0115 09:50:03.283054   26437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:50:03.287121   26437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:50:03.287171   26437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:50:03.287220   26437 kubeadm.go:404] StartCluster: {Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:50:03.287315   26437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:50:03.287357   26437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:50:03.324953   26437 cri.go:89] found id: ""
	I0115 09:50:03.325043   26437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:50:03.334724   26437 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0115 09:50:03.334751   26437 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0115 09:50:03.334761   26437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0115 09:50:03.334969   26437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:50:03.344718   26437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:50:03.353930   26437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0115 09:50:03.353955   26437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0115 09:50:03.353967   26437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0115 09:50:03.353978   26437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:50:03.354314   26437 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:50:03.354351   26437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 09:50:03.461547   26437 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 09:50:03.461614   26437 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0115 09:50:03.461686   26437 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:50:03.461699   26437 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 09:50:03.701427   26437 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:50:03.701461   26437 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:50:03.701609   26437 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:50:03.701649   26437 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:50:03.701786   26437 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:50:03.701800   26437 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:50:03.928505   26437 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:50:03.928570   26437 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:50:04.070223   26437 out.go:204]   - Generating certificates and keys ...
	I0115 09:50:04.070319   26437 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0115 09:50:04.070333   26437 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:50:04.070421   26437 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0115 09:50:04.070441   26437 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:50:04.264005   26437 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:50:04.264030   26437 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:50:04.483417   26437 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:50:04.483442   26437 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:50:04.586742   26437 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:50:04.586765   26437 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0115 09:50:04.727426   26437 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:50:04.727450   26437 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0115 09:50:04.847529   26437 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:50:04.847552   26437 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0115 09:50:04.847704   26437 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-975382] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0115 09:50:04.847727   26437 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-975382] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0115 09:50:05.071281   26437 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:50:05.071307   26437 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0115 09:50:05.071433   26437 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-975382] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0115 09:50:05.071448   26437 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-975382] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0115 09:50:05.557249   26437 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:50:05.557283   26437 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:50:05.734462   26437 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:50:05.734490   26437 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:50:05.992692   26437 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:50:05.992718   26437 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0115 09:50:05.992964   26437 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:50:05.992975   26437 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:50:06.104410   26437 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:50:06.104435   26437 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:50:06.296997   26437 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:50:06.297041   26437 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:50:06.438585   26437 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:50:06.438612   26437 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:50:06.674449   26437 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:50:06.674481   26437 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:50:06.675085   26437 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:50:06.675105   26437 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:50:06.678315   26437 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:50:06.680215   26437 out.go:204]   - Booting up control plane ...
	I0115 09:50:06.678361   26437 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:50:06.680304   26437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:50:06.680316   26437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:50:06.680393   26437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:50:06.680405   26437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:50:06.680719   26437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:50:06.680739   26437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:50:06.696430   26437 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:50:06.696483   26437 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:50:06.697373   26437 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:50:06.697389   26437 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:50:06.697421   26437 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:50:06.697438   26437 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 09:50:06.824814   26437 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:50:06.824844   26437 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:50:14.823625   26437 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003761 seconds
	I0115 09:50:14.823654   26437 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003761 seconds
	I0115 09:50:14.823772   26437 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:50:14.823781   26437 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:50:14.838132   26437 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:50:14.838179   26437 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:50:15.369820   26437 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:50:15.369849   26437 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:50:15.370093   26437 kubeadm.go:322] [mark-control-plane] Marking the node multinode-975382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:50:15.370109   26437 command_runner.go:130] > [mark-control-plane] Marking the node multinode-975382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:50:15.894509   26437 kubeadm.go:322] [bootstrap-token] Using token: ou2s1b.ghu9p96kgwq2x2so
	I0115 09:50:15.896344   26437 out.go:204]   - Configuring RBAC rules ...
	I0115 09:50:15.894602   26437 command_runner.go:130] > [bootstrap-token] Using token: ou2s1b.ghu9p96kgwq2x2so
	I0115 09:50:15.896498   26437 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:50:15.896514   26437 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:50:15.908295   26437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:50:15.908318   26437 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:50:15.920171   26437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:50:15.920199   26437 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:50:15.923698   26437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:50:15.923727   26437 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:50:15.927814   26437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:50:15.927829   26437 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:50:15.933524   26437 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:50:15.933545   26437 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:50:15.955292   26437 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:50:15.955316   26437 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:50:16.279122   26437 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:50:16.279151   26437 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0115 09:50:16.325646   26437 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:50:16.325674   26437 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0115 09:50:16.326514   26437 kubeadm.go:322] 
	I0115 09:50:16.326577   26437 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:50:16.326600   26437 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0115 09:50:16.326607   26437 kubeadm.go:322] 
	I0115 09:50:16.326706   26437 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:50:16.326714   26437 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0115 09:50:16.326718   26437 kubeadm.go:322] 
	I0115 09:50:16.326743   26437 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:50:16.326771   26437 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0115 09:50:16.326842   26437 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:50:16.326854   26437 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:50:16.326928   26437 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:50:16.326936   26437 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:50:16.326940   26437 kubeadm.go:322] 
	I0115 09:50:16.327026   26437 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 09:50:16.327038   26437 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0115 09:50:16.327050   26437 kubeadm.go:322] 
	I0115 09:50:16.327103   26437 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:50:16.327110   26437 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:50:16.327116   26437 kubeadm.go:322] 
	I0115 09:50:16.327179   26437 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:50:16.327188   26437 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0115 09:50:16.327296   26437 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:50:16.327308   26437 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:50:16.327406   26437 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:50:16.327416   26437 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:50:16.327422   26437 kubeadm.go:322] 
	I0115 09:50:16.327543   26437 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:50:16.327571   26437 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:50:16.327693   26437 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:50:16.327708   26437 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0115 09:50:16.327716   26437 kubeadm.go:322] 
	I0115 09:50:16.327837   26437 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ou2s1b.ghu9p96kgwq2x2so \
	I0115 09:50:16.327849   26437 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ou2s1b.ghu9p96kgwq2x2so \
	I0115 09:50:16.327981   26437 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 09:50:16.327993   26437 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 09:50:16.328031   26437 kubeadm.go:322] 	--control-plane 
	I0115 09:50:16.328044   26437 command_runner.go:130] > 	--control-plane 
	I0115 09:50:16.328066   26437 kubeadm.go:322] 
	I0115 09:50:16.328189   26437 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:50:16.328202   26437 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:50:16.328219   26437 kubeadm.go:322] 
	I0115 09:50:16.328345   26437 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ou2s1b.ghu9p96kgwq2x2so \
	I0115 09:50:16.328355   26437 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ou2s1b.ghu9p96kgwq2x2so \
	I0115 09:50:16.328495   26437 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 09:50:16.328526   26437 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 09:50:16.328635   26437 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:50:16.328651   26437 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
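	The join commands printed above embed a discovery hash of the cluster CA. If that value is lost, it can be recomputed with the standard kubeadm recipe (not minikube-specific), using the CA the log shows being copied to /var/lib/minikube/certs/ca.crt:
	# Recompute --discovery-token-ca-cert-hash from the cluster CA public key
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'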
	I0115 09:50:16.328661   26437 cni.go:84] Creating CNI manager for ""
	I0115 09:50:16.328671   26437 cni.go:136] 1 nodes found, recommending kindnet
	I0115 09:50:16.330605   26437 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 09:50:16.332289   26437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:50:16.339269   26437 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 09:50:16.339293   26437 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0115 09:50:16.339302   26437 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0115 09:50:16.339312   26437 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:50:16.339321   26437 command_runner.go:130] > Access: 2024-01-15 09:49:44.546824380 +0000
	I0115 09:50:16.339331   26437 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0115 09:50:16.339344   26437 command_runner.go:130] > Change: 2024-01-15 09:49:42.733824380 +0000
	I0115 09:50:16.339351   26437 command_runner.go:130] >  Birth: -
	I0115 09:50:16.339697   26437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 09:50:16.339718   26437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:50:16.367855   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 09:50:17.380290   26437 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0115 09:50:17.386689   26437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0115 09:50:17.396187   26437 command_runner.go:130] > serviceaccount/kindnet created
	I0115 09:50:17.412273   26437 command_runner.go:130] > daemonset.apps/kindnet created
	I0115 09:50:17.415703   26437 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.047805541s)
	I0115 09:50:17.415744   26437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:50:17.415812   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:17.415838   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-975382 minikube.k8s.io/updated_at=2024_01_15T09_50_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:17.442090   26437 command_runner.go:130] > -16
	I0115 09:50:17.442143   26437 ops.go:34] apiserver oom_adj: -16
	I0115 09:50:17.582396   26437 command_runner.go:130] > node/multinode-975382 labeled
	I0115 09:50:17.582500   26437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0115 09:50:17.582616   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:17.724338   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:18.082835   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:18.165643   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:18.582944   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:18.664923   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:19.083459   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:19.168471   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:19.582898   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:19.667073   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:20.083698   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:20.168099   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:20.582671   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:20.671314   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:21.083061   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:21.179390   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:21.582736   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:21.673154   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:22.082960   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:22.173275   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:22.583008   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:22.667576   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:23.082659   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:23.164994   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:23.582780   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:23.670207   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:24.083391   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:24.171387   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:24.583203   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:24.665417   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:25.083655   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:25.162197   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:25.583218   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:25.672600   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:26.083023   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:26.172580   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:26.582814   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:26.698495   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:27.083186   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:27.169889   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:27.583415   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:27.720051   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:28.082768   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:28.268777   26437 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:50:28.583316   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:50:28.669487   26437 command_runner.go:130] > NAME      SECRETS   AGE
	I0115 09:50:28.670113   26437 command_runner.go:130] > default   0         0s
	I0115 09:50:28.672104   26437 kubeadm.go:1088] duration metric: took 11.256340338s to wait for elevateKubeSystemPrivileges.
	I0115 09:50:28.672135   26437 kubeadm.go:406] StartCluster complete in 25.384921099s
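	The repeated "kubectl get sa default" calls above are minikube polling until the controller-manager creates the default ServiceAccount; the same wait can be written as a simple loop, a sketch using the kubeconfig and binary paths from the log:
	# Poll until the "default" ServiceAccount exists in the default namespace
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done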
	I0115 09:50:28.672152   26437 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:28.672223   26437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:50:28.673125   26437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:50:28.673449   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:50:28.673599   26437 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 09:50:28.673672   26437 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:50:28.673685   26437 addons.go:69] Setting storage-provisioner=true in profile "multinode-975382"
	I0115 09:50:28.673697   26437 addons.go:69] Setting default-storageclass=true in profile "multinode-975382"
	I0115 09:50:28.673711   26437 addons.go:234] Setting addon storage-provisioner=true in "multinode-975382"
	I0115 09:50:28.673741   26437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-975382"
	I0115 09:50:28.673766   26437 host.go:66] Checking if "multinode-975382" exists ...
	I0115 09:50:28.673886   26437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:50:28.674173   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:28.674220   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:28.674229   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:28.674251   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:28.674229   26437 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:50:28.674904   26437 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 09:50:28.675245   26437 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:50:28.675267   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:28.675276   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:28.675286   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:28.689623   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42117
	I0115 09:50:28.690257   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:28.690665   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0115 09:50:28.692175   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:28.692205   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:28.692303   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:28.692560   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:28.692781   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:28.692805   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:28.693133   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:28.693150   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:28.693180   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:28.693333   26437 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:50:28.693509   26437 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0115 09:50:28.693525   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:28.693535   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:28.693549   26437 round_trippers.go:580]     Content-Length: 291
	I0115 09:50:28.693561   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:28 GMT
	I0115 09:50:28.693568   26437 round_trippers.go:580]     Audit-Id: 0f334ec1-7401-431a-93d1-3965b5c56add
	I0115 09:50:28.693580   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:28.693592   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:28.693603   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:28.693634   26437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"382","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 09:50:28.694016   26437 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"382","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 09:50:28.694071   26437 round_trippers.go:463] PUT https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:50:28.694083   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:28.694094   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:28.694103   26437 round_trippers.go:473]     Content-Type: application/json
	I0115 09:50:28.694110   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:28.695629   26437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:50:28.695935   26437 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:50:28.696254   26437 addons.go:234] Setting addon default-storageclass=true in "multinode-975382"
	I0115 09:50:28.696293   26437 host.go:66] Checking if "multinode-975382" exists ...
	I0115 09:50:28.696704   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:28.696739   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:28.704661   26437 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 09:50:28.704683   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:28.704694   26437 round_trippers.go:580]     Audit-Id: de345a18-a49f-4c56-b70b-4efe1a58faf3
	I0115 09:50:28.704705   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:28.704717   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:28.704726   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:28.704738   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:28.704750   26437 round_trippers.go:580]     Content-Length: 291
	I0115 09:50:28.704762   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:28 GMT
	I0115 09:50:28.704794   26437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"383","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 09:50:28.707899   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I0115 09:50:28.708311   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:28.708731   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:28.708749   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:28.709063   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:28.709244   26437 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:50:28.710787   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0115 09:50:28.711127   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:28.711127   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:50:28.713307   26437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:50:28.711519   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:28.714752   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:28.714899   26437 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:50:28.714924   26437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:50:28.714949   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:50:28.715096   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:28.715674   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:28.715711   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:28.718026   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:50:28.718433   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:50:28.718461   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:50:28.718634   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:50:28.718795   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:50:28.718939   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:50:28.719064   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:50:28.729666   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0115 09:50:28.730021   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:28.730384   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:28.730407   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:28.730692   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:28.730889   26437 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:50:28.732015   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:50:28.732293   26437 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:50:28.732314   26437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:50:28.732332   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:50:28.734529   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:50:28.735013   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:50:28.735047   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:50:28.735160   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:50:28.735324   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:50:28.735463   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:50:28.735599   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:50:28.883722   26437 command_runner.go:130] > apiVersion: v1
	I0115 09:50:28.883745   26437 command_runner.go:130] > data:
	I0115 09:50:28.883750   26437 command_runner.go:130] >   Corefile: |
	I0115 09:50:28.883754   26437 command_runner.go:130] >     .:53 {
	I0115 09:50:28.883757   26437 command_runner.go:130] >         errors
	I0115 09:50:28.883762   26437 command_runner.go:130] >         health {
	I0115 09:50:28.883766   26437 command_runner.go:130] >            lameduck 5s
	I0115 09:50:28.883770   26437 command_runner.go:130] >         }
	I0115 09:50:28.883774   26437 command_runner.go:130] >         ready
	I0115 09:50:28.883780   26437 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0115 09:50:28.883787   26437 command_runner.go:130] >            pods insecure
	I0115 09:50:28.883792   26437 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0115 09:50:28.883797   26437 command_runner.go:130] >            ttl 30
	I0115 09:50:28.883802   26437 command_runner.go:130] >         }
	I0115 09:50:28.883806   26437 command_runner.go:130] >         prometheus :9153
	I0115 09:50:28.883826   26437 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0115 09:50:28.883839   26437 command_runner.go:130] >            max_concurrent 1000
	I0115 09:50:28.883843   26437 command_runner.go:130] >         }
	I0115 09:50:28.883846   26437 command_runner.go:130] >         cache 30
	I0115 09:50:28.883850   26437 command_runner.go:130] >         loop
	I0115 09:50:28.883854   26437 command_runner.go:130] >         reload
	I0115 09:50:28.883858   26437 command_runner.go:130] >         loadbalance
	I0115 09:50:28.883862   26437 command_runner.go:130] >     }
	I0115 09:50:28.883866   26437 command_runner.go:130] > kind: ConfigMap
	I0115 09:50:28.883870   26437 command_runner.go:130] > metadata:
	I0115 09:50:28.883876   26437 command_runner.go:130] >   creationTimestamp: "2024-01-15T09:50:16Z"
	I0115 09:50:28.883882   26437 command_runner.go:130] >   name: coredns
	I0115 09:50:28.883886   26437 command_runner.go:130] >   namespace: kube-system
	I0115 09:50:28.883893   26437 command_runner.go:130] >   resourceVersion: "267"
	I0115 09:50:28.883898   26437 command_runner.go:130] >   uid: 8494dd8b-c116-469e-9602-9f697bb20e4e
	I0115 09:50:28.884235   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:50:28.909434   26437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:50:28.916713   26437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:50:29.175318   26437 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:50:29.175341   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:29.175349   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:29.175354   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:29.185291   26437 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0115 09:50:29.185324   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:29.185339   26437 round_trippers.go:580]     Audit-Id: 065ee25c-4440-4388-94d2-513498a814ca
	I0115 09:50:29.185345   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:29.185350   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:29.185355   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:29.185360   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:29.185366   26437 round_trippers.go:580]     Content-Length: 291
	I0115 09:50:29.185371   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:29 GMT
	I0115 09:50:29.185416   26437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"393","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 09:50:29.185518   26437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-975382" context rescaled to 1 replicas
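The GET/PUT pair against /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale shown above is how the CoreDNS deployment is rescaled from 2 replicas to 1 for a single-node start. A minimal client-go sketch of the same Scale-subresource round trip, assuming the guest kubeconfig path from the log (the namespace and deployment name match the request URLs above; this is an illustration, not minikube's implementation):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, copied from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        deployments := cs.AppsV1().Deployments("kube-system")

        // GET .../deployments/coredns/scale
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // PUT .../deployments/coredns/scale with spec.replicas lowered to 1
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }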
	I0115 09:50:29.185558   26437 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:50:29.188328   26437 out.go:177] * Verifying Kubernetes components...
	I0115 09:50:29.189757   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:50:29.700470   26437 command_runner.go:130] > configmap/coredns replaced
	I0115 09:50:29.703055   26437 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
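The sed pipeline run a few lines earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here). Going only by the two sed expressions in that command, the replaced Corefile gains a "log" directive ahead of "errors" and, immediately before the forward block, a hosts stanza roughly like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

The fallthrough keeps all other names flowing on to the forward plugin, so only host.minikube.internal is answered from this static entry.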
	I0115 09:50:29.843984   26437 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0115 09:50:29.850139   26437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0115 09:50:29.861491   26437 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 09:50:29.878644   26437 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 09:50:29.888533   26437 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0115 09:50:29.906224   26437 command_runner.go:130] > pod/storage-provisioner created
	I0115 09:50:29.908792   26437 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0115 09:50:29.908848   26437 main.go:141] libmachine: Making call to close driver server
	I0115 09:50:29.908867   26437 main.go:141] libmachine: (multinode-975382) Calling .Close
	I0115 09:50:29.908935   26437 main.go:141] libmachine: Making call to close driver server
	I0115 09:50:29.908956   26437 main.go:141] libmachine: (multinode-975382) Calling .Close
	I0115 09:50:29.909190   26437 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:50:29.909236   26437 main.go:141] libmachine: (multinode-975382) DBG | Closing plugin on server side
	I0115 09:50:29.909248   26437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:50:29.909252   26437 main.go:141] libmachine: (multinode-975382) DBG | Closing plugin on server side
	I0115 09:50:29.909261   26437 main.go:141] libmachine: Making call to close driver server
	I0115 09:50:29.909269   26437 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:50:29.909272   26437 main.go:141] libmachine: (multinode-975382) Calling .Close
	I0115 09:50:29.909280   26437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:50:29.909293   26437 main.go:141] libmachine: Making call to close driver server
	I0115 09:50:29.909303   26437 main.go:141] libmachine: (multinode-975382) Calling .Close
	I0115 09:50:29.909312   26437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:50:29.909533   26437 main.go:141] libmachine: (multinode-975382) DBG | Closing plugin on server side
	I0115 09:50:29.909588   26437 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:50:29.909614   26437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:50:29.909664   26437 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:50:29.909693   26437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:50:29.909649   26437 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:50:29.909754   26437 round_trippers.go:463] GET https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses
	I0115 09:50:29.909762   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:29.909772   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:29.909786   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:29.909973   26437 node_ready.go:35] waiting up to 6m0s for node "multinode-975382" to be "Ready" ...
	I0115 09:50:29.910058   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:29.910068   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:29.910079   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:29.910090   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:29.919814   26437 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 09:50:29.919834   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:29.919844   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:29.919851   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:29.919865   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:29.919875   26437 round_trippers.go:580]     Content-Length: 1273
	I0115 09:50:29.919883   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:29 GMT
	I0115 09:50:29.919890   26437 round_trippers.go:580]     Audit-Id: dc2a239c-9ac3-4ff4-a3ff-1372e2cace39
	I0115 09:50:29.919903   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:29.920140   26437 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"403"},"items":[{"metadata":{"name":"standard","uid":"d9dd2392-6bc6-4dac-8ac4-943717ca92fb","resourceVersion":"394","creationTimestamp":"2024-01-15T09:50:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:50:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0115 09:50:29.920436   26437 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 09:50:29.920455   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:29.920464   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:29.920472   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:29.920484   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:29.920492   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:29.920505   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:29 GMT
	I0115 09:50:29.920517   26437 round_trippers.go:580]     Audit-Id: 84d34f9d-4525-4a39-9d77-ce7a83b36fc0
	I0115 09:50:29.920613   26437 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d9dd2392-6bc6-4dac-8ac4-943717ca92fb","resourceVersion":"394","creationTimestamp":"2024-01-15T09:50:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:50:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 09:50:29.920668   26437 round_trippers.go:463] PUT https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0115 09:50:29.920680   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:29.920691   26437 round_trippers.go:473]     Content-Type: application/json
	I0115 09:50:29.920705   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:29.920718   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:29.921229   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:29.923316   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:29.923331   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:29.923341   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:29 GMT
	I0115 09:50:29.923348   26437 round_trippers.go:580]     Audit-Id: 6ec2acbc-70a9-4548-9efb-d58946e37dbe
	I0115 09:50:29.923356   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:29.923365   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:29.923378   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:29.923390   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:29.923402   26437 round_trippers.go:580]     Content-Length: 1220
	I0115 09:50:29.923432   26437 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d9dd2392-6bc6-4dac-8ac4-943717ca92fb","resourceVersion":"394","creationTimestamp":"2024-01-15T09:50:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:50:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 09:50:29.923538   26437 main.go:141] libmachine: Making call to close driver server
	I0115 09:50:29.923553   26437 main.go:141] libmachine: (multinode-975382) Calling .Close
	I0115 09:50:29.923770   26437 main.go:141] libmachine: Successfully made call to close driver server
	I0115 09:50:29.923790   26437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 09:50:29.926431   26437 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 09:50:29.928100   26437 addons.go:505] enable addons completed in 1.25450059s: enabled=[storage-provisioner default-storageclass]
	I0115 09:50:30.410343   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:30.410361   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:30.410369   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:30.410374   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:30.413145   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:30.413173   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:30.413183   26437 round_trippers.go:580]     Audit-Id: d128de45-38e2-46d9-9cc5-1fecb68f72e5
	I0115 09:50:30.413191   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:30.413199   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:30.413207   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:30.413213   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:30.413221   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:30 GMT
	I0115 09:50:30.413384   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:30.911080   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:30.911105   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:30.911113   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:30.911119   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:30.913503   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:30.913520   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:30.913527   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:30.913532   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:30.913537   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:30 GMT
	I0115 09:50:30.913542   26437 round_trippers.go:580]     Audit-Id: e3d8ffd1-48df-4995-82ab-034746b96743
	I0115 09:50:30.913546   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:30.913551   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:30.913727   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:31.410387   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:31.410422   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:31.410445   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:31.410454   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:31.413797   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:31.413812   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:31.413818   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:31.413824   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:31 GMT
	I0115 09:50:31.413828   26437 round_trippers.go:580]     Audit-Id: b8aaf32a-f884-4040-9249-19d4f5c5e378
	I0115 09:50:31.413833   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:31.413840   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:31.413848   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:31.414013   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:31.910649   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:31.910673   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:31.910680   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:31.910690   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:31.913087   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:31.913105   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:31.913115   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:31.913125   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:31.913133   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:31.913141   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:31 GMT
	I0115 09:50:31.913150   26437 round_trippers.go:580]     Audit-Id: 23ba223c-9c2f-4c3c-9bdd-c0bbe8eff7fd
	I0115 09:50:31.913159   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:31.913517   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:31.913804   26437 node_ready.go:58] node "multinode-975382" has status "Ready":"False"
	I0115 09:50:32.410142   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:32.410165   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:32.410176   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:32.410184   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:32.412978   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:32.412998   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:32.413005   26437 round_trippers.go:580]     Audit-Id: 0731905d-2aa8-4c5d-8a9f-3cce1eb2ee22
	I0115 09:50:32.413010   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:32.413015   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:32.413023   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:32.413031   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:32.413040   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:32 GMT
	I0115 09:50:32.413410   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:32.911107   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:32.911132   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:32.911139   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:32.911146   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:32.913732   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:32.913751   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:32.913757   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:32.913762   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:32.913767   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:32 GMT
	I0115 09:50:32.913772   26437 round_trippers.go:580]     Audit-Id: ebfdb2a9-4d09-44bf-9abc-f8f37740b00c
	I0115 09:50:32.913777   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:32.913785   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:32.914265   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:33.410805   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:33.410829   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:33.410837   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:33.410843   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:33.413908   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:33.413932   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:33.413938   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:33.413944   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:33.413949   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:33.413954   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:33.413960   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:33 GMT
	I0115 09:50:33.413965   26437 round_trippers.go:580]     Audit-Id: 271251cf-4042-412b-b97d-586f12b417a2
	I0115 09:50:33.414568   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"356","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0115 09:50:33.910810   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:33.910832   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:33.910840   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:33.910846   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:33.913448   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:33.913465   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:33.913472   26437 round_trippers.go:580]     Audit-Id: dac52346-863e-4672-aa70-8d615dd26095
	I0115 09:50:33.913477   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:33.913483   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:33.913490   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:33.913498   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:33.913504   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:33 GMT
	I0115 09:50:33.913926   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:33.914215   26437 node_ready.go:49] node "multinode-975382" has status "Ready":"True"
	I0115 09:50:33.914230   26437 node_ready.go:38] duration metric: took 4.004234468s waiting for node "multinode-975382" to be "Ready" ...
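The repeated GET /api/v1/nodes/multinode-975382 requests above are polling for the node's Ready condition, which flips to True once the CNI and kubelet settle. A minimal sketch of that readiness check with client-go, assuming the node name and kubeconfig path shown in the log (the helper is illustrative, not minikube's own node_ready code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has condition Ready=True.
    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Assumed kubeconfig path, copied from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(cs, "multinode-975382")
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", ready)
    }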
	I0115 09:50:33.914238   26437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:50:33.914312   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:50:33.914321   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:33.914327   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:33.914333   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:33.917795   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:33.917811   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:33.917817   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:33.917822   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:33.917827   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:33 GMT
	I0115 09:50:33.917832   26437 round_trippers.go:580]     Audit-Id: a411e2dc-281a-4b99-a64a-dd019c3be5a9
	I0115 09:50:33.917845   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:33.917853   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:33.919229   26437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"422","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54368 chars]
	I0115 09:50:33.922124   26437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:33.922185   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:50:33.922192   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:33.922199   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:33.922205   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:33.924037   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:33.924053   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:33.924062   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:33.924070   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:33.924077   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:33.924086   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:33 GMT
	I0115 09:50:33.924096   26437 round_trippers.go:580]     Audit-Id: bb08f6ca-af8c-4679-8cf6-ea5414433177
	I0115 09:50:33.924107   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:33.924413   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"422","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0115 09:50:33.924773   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:33.924785   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:33.924791   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:33.924797   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:33.926698   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:33.926718   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:33.926727   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:33 GMT
	I0115 09:50:33.926736   26437 round_trippers.go:580]     Audit-Id: ca2bb3dc-4999-43d0-b719-d899d9045b92
	I0115 09:50:33.926743   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:33.926754   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:33.926761   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:33.926769   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:33.926956   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:34.422761   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:50:34.422792   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:34.422804   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:34.422813   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:34.434081   26437 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0115 09:50:34.434107   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:34.434119   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:34.434127   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:34.434136   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:34.434144   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:34.434151   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:34 GMT
	I0115 09:50:34.434158   26437 round_trippers.go:580]     Audit-Id: 32b9389e-8a77-4745-a529-02f3aabd0d5e
	I0115 09:50:34.435123   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"422","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0115 09:50:34.435608   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:34.435624   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:34.435631   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:34.435639   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:34.441046   26437 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 09:50:34.441068   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:34.441078   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:34.441086   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:34.441094   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:34 GMT
	I0115 09:50:34.441102   26437 round_trippers.go:580]     Audit-Id: e663b2a4-3c17-4a09-915d-75c788042078
	I0115 09:50:34.441110   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:34.441118   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:34.442069   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:34.922675   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:50:34.922698   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:34.922706   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:34.922712   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:34.925769   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:34.925794   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:34.925805   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:34.925813   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:34.925822   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:34 GMT
	I0115 09:50:34.925830   26437 round_trippers.go:580]     Audit-Id: 8de31c81-43c6-48b6-be3a-3b55deabd908
	I0115 09:50:34.925838   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:34.925847   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:34.926090   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"422","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0115 09:50:34.926530   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:34.926547   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:34.926557   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:34.926567   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:34.928690   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:34.928709   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:34.928719   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:34.928728   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:34.928735   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:34.928746   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:34 GMT
	I0115 09:50:34.928755   26437 round_trippers.go:580]     Audit-Id: a45745e1-ff1a-4edb-9f27-c894cc6a4998
	I0115 09:50:34.928762   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:34.929019   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:35.422672   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:50:35.422697   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.422704   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.422710   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.425577   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:35.425601   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.425611   26437 round_trippers.go:580]     Audit-Id: 25ba61a5-08c8-4e01-a78c-31765e242e30
	I0115 09:50:35.425619   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.425636   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.425643   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.425655   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.425666   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.426042   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"422","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0115 09:50:35.426511   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:35.426525   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.426532   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.426538   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.428716   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:35.428737   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.428747   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.428755   26437 round_trippers.go:580]     Audit-Id: a63917ed-3db9-4a37-a6b5-34cae8cbcfd4
	I0115 09:50:35.428765   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.428773   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.428783   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.428795   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.428974   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:35.922624   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:50:35.922649   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.922657   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.922663   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.925883   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:35.925906   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.925915   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.925922   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.925930   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.925955   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.925964   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.925976   26437 round_trippers.go:580]     Audit-Id: 68715743-1382-43ad-b1d8-5d23f8fe25da
	I0115 09:50:35.926339   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"435","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0115 09:50:35.926786   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:35.926800   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.926807   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.926813   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.928989   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:35.929004   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.929010   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.929016   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.929021   26437 round_trippers.go:580]     Audit-Id: 18054a87-ad78-462e-9018-e599bf56fb7c
	I0115 09:50:35.929026   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.929034   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.929042   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.929295   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:35.929622   26437 pod_ready.go:92] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:35.929639   26437 pod_ready.go:81] duration metric: took 2.007493842s waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:35.929647   26437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:35.929694   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 09:50:35.929702   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.929708   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.929717   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.931789   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:35.931802   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.931807   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.931813   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.931818   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.931826   26437 round_trippers.go:580]     Audit-Id: d9d8ad17-bfcd-44a1-80cc-6bf482c52c0b
	I0115 09:50:35.931838   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.931850   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.932012   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"325","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I0115 09:50:35.932428   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:35.932443   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:35.932450   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:35.932456   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:35.934031   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:35.934043   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:35.934048   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:35 GMT
	I0115 09:50:35.934053   26437 round_trippers.go:580]     Audit-Id: 044ee360-c04e-4dd6-bc29-f6fe6f95ca3f
	I0115 09:50:35.934058   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:35.934063   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:35.934071   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:35.934076   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:35.934394   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.430812   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 09:50:36.430834   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.430841   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.430847   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.433573   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.433599   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.433609   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.433617   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.433625   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.433633   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.433643   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.433653   26437 round_trippers.go:580]     Audit-Id: dc5cf9a7-4414-421e-870c-330bb20e2d08
	I0115 09:50:36.433940   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"325","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I0115 09:50:36.434346   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:36.434360   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.434367   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.434373   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.436678   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.436694   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.436700   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.436705   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.436710   26437 round_trippers.go:580]     Audit-Id: f2477645-42c8-42b4-8c3e-26bbbd9c5cfe
	I0115 09:50:36.436715   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.436724   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.436730   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.437291   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.929903   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 09:50:36.929926   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.929934   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.929940   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.932749   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.932772   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.932781   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.932790   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.932798   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.932811   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.932819   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.932830   26437 round_trippers.go:580]     Audit-Id: 4a991521-28d8-4a23-90a9-7d932765eb5e
	I0115 09:50:36.932945   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"441","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0115 09:50:36.933387   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:36.933408   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.933419   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.933428   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.936454   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:36.936475   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.936485   26437 round_trippers.go:580]     Audit-Id: a20ab821-e30c-479f-bb8b-ca9bb092a79c
	I0115 09:50:36.936492   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.936505   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.936512   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.936522   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.936531   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.936712   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.937090   26437 pod_ready.go:92] pod "etcd-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:36.937109   26437 pod_ready.go:81] duration metric: took 1.00745337s waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.937121   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.937173   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 09:50:36.937181   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.937188   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.937194   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.939262   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.939281   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.939290   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.939298   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.939313   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.939321   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.939332   26437 round_trippers.go:580]     Audit-Id: fde843a5-4537-4a89-8b1d-cf2d32708f16
	I0115 09:50:36.939343   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.939782   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"334","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0115 09:50:36.940114   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:36.940128   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.940135   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.940140   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.941988   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:36.942006   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.942016   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.942025   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.942038   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.942048   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.942059   26437 round_trippers.go:580]     Audit-Id: 75683d89-0dde-4750-a6b7-9c0ca8a4aa56
	I0115 09:50:36.942067   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.942228   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.942553   26437 pod_ready.go:92] pod "kube-apiserver-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:36.942568   26437 pod_ready.go:81] duration metric: took 5.441071ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.942576   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.942614   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 09:50:36.942621   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.942627   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.942633   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.944604   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:36.944622   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.944632   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.944639   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.944651   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.944659   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.944669   26437 round_trippers.go:580]     Audit-Id: a96f3729-122d-4fb0-9097-b4c4ad2ee0c9
	I0115 09:50:36.944681   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.944835   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"335","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0115 09:50:36.945162   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:36.945175   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.945182   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.945188   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.947526   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.947545   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.947554   26437 round_trippers.go:580]     Audit-Id: 366d7f99-9999-4a34-926b-cdc18b3a9c9d
	I0115 09:50:36.947562   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.947571   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.947578   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.947586   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.947594   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.948407   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.948747   26437 pod_ready.go:92] pod "kube-controller-manager-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:36.948769   26437 pod_ready.go:81] duration metric: took 6.186787ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.948781   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.948836   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 09:50:36.948847   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.948857   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.948863   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.951037   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.951050   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.951056   26437 round_trippers.go:580]     Audit-Id: d920a829-5c7c-49ea-aa69-060ffaef9348
	I0115 09:50:36.951061   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.951066   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.951071   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.951076   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.951084   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.951532   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"408","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 09:50:36.951946   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:36.951960   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.951969   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.951975   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.954340   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:36.954360   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.954369   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.954377   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.954386   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.954394   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.954408   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.954427   26437 round_trippers.go:580]     Audit-Id: 11c34946-8cc9-4a42-a7e7-440811173ba9
	I0115 09:50:36.955189   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:36.955539   26437 pod_ready.go:92] pod "kube-proxy-jgsx4" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:36.955558   26437 pod_ready.go:81] duration metric: took 6.769385ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.955568   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:36.955614   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 09:50:36.955622   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:36.955628   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:36.955634   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:36.957446   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:50:36.957458   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:36.957464   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:36.957470   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:36 GMT
	I0115 09:50:36.957475   26437 round_trippers.go:580]     Audit-Id: 3a98da0a-9588-40b7-806f-0bb9edb7023a
	I0115 09:50:36.957481   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:36.957486   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:36.957493   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:36.957758   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"440","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0115 09:50:37.123381   26437 request.go:629] Waited for 165.327397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:37.123451   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:50:37.123459   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.123471   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.123485   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.126032   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:37.126051   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.126061   26437 round_trippers.go:580]     Audit-Id: 7eca5ecf-7b80-4cc1-a77f-ad9ea708f0fb
	I0115 09:50:37.126069   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.126078   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.126088   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.126102   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.126114   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.126608   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:50:37.126992   26437 pod_ready.go:92] pod "kube-scheduler-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:50:37.127010   26437 pod_ready.go:81] duration metric: took 171.42978ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:50:37.127024   26437 pod_ready.go:38] duration metric: took 3.212756707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:50:37.127045   26437 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:50:37.127097   26437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:50:37.140043   26437 command_runner.go:130] > 1113
	I0115 09:50:37.140187   26437 api_server.go:72] duration metric: took 7.954589629s to wait for apiserver process to appear ...
	I0115 09:50:37.140206   26437 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:50:37.140225   26437 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 09:50:37.145309   26437 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0115 09:50:37.145362   26437 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0115 09:50:37.145369   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.145377   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.145385   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.146375   26437 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0115 09:50:37.146395   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.146401   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.146407   26437 round_trippers.go:580]     Audit-Id: e930dc1b-64f0-467a-9ba9-1bc8386391b5
	I0115 09:50:37.146412   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.146430   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.146436   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.146441   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.146447   26437 round_trippers.go:580]     Content-Length: 264
	I0115 09:50:37.146463   26437 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0115 09:50:37.146532   26437 api_server.go:141] control plane version: v1.28.4
	I0115 09:50:37.146548   26437 api_server.go:131] duration metric: took 6.33663ms to wait for apiserver health ...
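	The healthz and version probes above can be reproduced by hand against the same apiserver; a minimal sketch, assuming the kubeconfig context for this profile is named multinode-975382 (the profile name, not confirmed by this excerpt):

	  # Same two checks minikube just performed: raw /healthz, then /version.
	  kubectl --context multinode-975382 get --raw /healthz    # expect: ok
	  kubectl --context multinode-975382 get --raw /version    # expect the v1.28.4 payload shown above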
	I0115 09:50:37.146555   26437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:50:37.322948   26437 request.go:629] Waited for 176.331317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:50:37.323025   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:50:37.323030   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.323038   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.323047   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.326738   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:37.326761   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.326770   26437 round_trippers.go:580]     Audit-Id: 8a8a6c74-934e-4408-adc5-26dfb7eaa963
	I0115 09:50:37.326780   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.326792   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.326802   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.326811   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.326819   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.327700   26437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"435","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0115 09:50:37.329303   26437 system_pods.go:59] 8 kube-system pods found
	I0115 09:50:37.329328   26437 system_pods.go:61] "coredns-5dd5756b68-n2sqg" [f303a63a-c959-477e-89d5-c35bd0802b1b] Running
	I0115 09:50:37.329335   26437 system_pods.go:61] "etcd-multinode-975382" [6b8601c3-a366-4171-9221-4b83d091aff7] Running
	I0115 09:50:37.329341   26437 system_pods.go:61] "kindnet-7tf97" [3b9e470b-af37-44cd-8402-6ec9b3340058] Running
	I0115 09:50:37.329346   26437 system_pods.go:61] "kube-apiserver-multinode-975382" [0c174d15-48a9-4394-ba76-207b7cbc42a0] Running
	I0115 09:50:37.329354   26437 system_pods.go:61] "kube-controller-manager-multinode-975382" [0fabcc70-f923-40a7-86b4-70c0cc2213ce] Running
	I0115 09:50:37.329361   26437 system_pods.go:61] "kube-proxy-jgsx4" [a779cea9-5532-4d69-9e49-ac2879e028ec] Running
	I0115 09:50:37.329367   26437 system_pods.go:61] "kube-scheduler-multinode-975382" [d7c93aee-4d7c-4264-8d65-de8781105178] Running
	I0115 09:50:37.329378   26437 system_pods.go:61] "storage-provisioner" [b8eb636d-31de-4a7e-a296-a66493d5a827] Running
	I0115 09:50:37.329385   26437 system_pods.go:74] duration metric: took 182.82427ms to wait for pod list to return data ...
	I0115 09:50:37.329395   26437 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:50:37.522759   26437 request.go:629] Waited for 193.294822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:50:37.522833   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:50:37.522838   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.522845   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.522855   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.525839   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:37.525858   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.525864   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.525870   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.525875   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.525883   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.525888   26437 round_trippers.go:580]     Content-Length: 261
	I0115 09:50:37.525893   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.525898   26437 round_trippers.go:580]     Audit-Id: afd386bf-b7c9-4431-8007-e6449f6a2459
	I0115 09:50:37.525924   26437 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bb2aa1f7-da8f-4785-82a8-74ac34272521","resourceVersion":"360","creationTimestamp":"2024-01-15T09:50:28Z"}}]}
	I0115 09:50:37.526130   26437 default_sa.go:45] found service account: "default"
	I0115 09:50:37.526146   26437 default_sa.go:55] duration metric: took 196.745844ms for default service account to be created ...
	I0115 09:50:37.526155   26437 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:50:37.723566   26437 request.go:629] Waited for 197.360537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:50:37.723646   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:50:37.723654   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.723661   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.723668   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.727148   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:50:37.727172   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.727181   26437 round_trippers.go:580]     Audit-Id: 0e39a366-2b9e-4978-8af9-d7bf9cf831d2
	I0115 09:50:37.727190   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.727198   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.727210   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.727222   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.727230   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.728567   26437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"435","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0115 09:50:37.730238   26437 system_pods.go:86] 8 kube-system pods found
	I0115 09:50:37.730260   26437 system_pods.go:89] "coredns-5dd5756b68-n2sqg" [f303a63a-c959-477e-89d5-c35bd0802b1b] Running
	I0115 09:50:37.730268   26437 system_pods.go:89] "etcd-multinode-975382" [6b8601c3-a366-4171-9221-4b83d091aff7] Running
	I0115 09:50:37.730272   26437 system_pods.go:89] "kindnet-7tf97" [3b9e470b-af37-44cd-8402-6ec9b3340058] Running
	I0115 09:50:37.730276   26437 system_pods.go:89] "kube-apiserver-multinode-975382" [0c174d15-48a9-4394-ba76-207b7cbc42a0] Running
	I0115 09:50:37.730283   26437 system_pods.go:89] "kube-controller-manager-multinode-975382" [0fabcc70-f923-40a7-86b4-70c0cc2213ce] Running
	I0115 09:50:37.730288   26437 system_pods.go:89] "kube-proxy-jgsx4" [a779cea9-5532-4d69-9e49-ac2879e028ec] Running
	I0115 09:50:37.730293   26437 system_pods.go:89] "kube-scheduler-multinode-975382" [d7c93aee-4d7c-4264-8d65-de8781105178] Running
	I0115 09:50:37.730297   26437 system_pods.go:89] "storage-provisioner" [b8eb636d-31de-4a7e-a296-a66493d5a827] Running
	I0115 09:50:37.730302   26437 system_pods.go:126] duration metric: took 204.143736ms to wait for k8s-apps to be running ...
	I0115 09:50:37.730318   26437 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:50:37.730357   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:50:37.743670   26437 system_svc.go:56] duration metric: took 13.344531ms WaitForService to wait for kubelet.
	I0115 09:50:37.743694   26437 kubeadm.go:581] duration metric: took 8.558097425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:50:37.743715   26437 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:50:37.923160   26437 request.go:629] Waited for 179.363925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0115 09:50:37.923218   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 09:50:37.923225   26437 round_trippers.go:469] Request Headers:
	I0115 09:50:37.923239   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:50:37.923247   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:50:37.926008   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:50:37.926029   26437 round_trippers.go:577] Response Headers:
	I0115 09:50:37.926038   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:50:37.926046   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:50:37.926055   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:50:37.926063   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:50:37 GMT
	I0115 09:50:37.926073   26437 round_trippers.go:580]     Audit-Id: bed49119-270a-4436-aaa3-42952bd4c41e
	I0115 09:50:37.926082   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:50:37.926311   26437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0115 09:50:37.926740   26437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 09:50:37.926764   26437 node_conditions.go:123] node cpu capacity is 2
	I0115 09:50:37.926773   26437 node_conditions.go:105] duration metric: took 183.054213ms to run NodePressure ...
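	The capacity figures logged here (2 CPUs, 17784752Ki ephemeral storage) are read from the node's status; a quick way to view the same data directly, assuming the same context name as above:

	  # Capacity block of the control-plane node, as reported by the apiserver.
	  kubectl --context multinode-975382 describe node multinode-975382 | grep -A 6 'Capacity:'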
	I0115 09:50:37.926783   26437 start.go:228] waiting for startup goroutines ...
	I0115 09:50:37.926790   26437 start.go:233] waiting for cluster config update ...
	I0115 09:50:37.926799   26437 start.go:242] writing updated cluster config ...
	I0115 09:50:37.929051   26437 out.go:177] 
	I0115 09:50:37.930706   26437 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:50:37.930770   26437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:50:37.932573   26437 out.go:177] * Starting worker node multinode-975382-m02 in cluster multinode-975382
	I0115 09:50:37.934038   26437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:50:37.934057   26437 cache.go:56] Caching tarball of preloaded images
	I0115 09:50:37.934125   26437 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:50:37.934137   26437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:50:37.934196   26437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:50:37.934340   26437 start.go:365] acquiring machines lock for multinode-975382-m02: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:50:37.934378   26437 start.go:369] acquired machines lock for "multinode-975382-m02" in 21.821µs
	I0115 09:50:37.934394   26437 start.go:93] Provisioning new machine with config: &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
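	The {Name:m02 ...} node entry in this config is what triggers provisioning of the second machine below; for reference, a multi-node profile of this shape is typically requested with flags along these lines (a sketch only; the exact invocation used by the test is not shown in this excerpt):

	  minikube start -p multinode-975382 --driver=kvm2 --container-runtime=crio --memory=2200 --nodes=2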
	I0115 09:50:37.934475   26437 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0115 09:50:37.936532   26437 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0115 09:50:37.936607   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:50:37.936637   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:50:37.950250   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0115 09:50:37.950622   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:50:37.950991   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:50:37.951008   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:50:37.951295   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:50:37.951461   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 09:50:37.951593   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:50:37.951739   26437 start.go:159] libmachine.API.Create for "multinode-975382" (driver="kvm2")
	I0115 09:50:37.951754   26437 client.go:168] LocalClient.Create starting
	I0115 09:50:37.951775   26437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 09:50:37.951809   26437 main.go:141] libmachine: Decoding PEM data...
	I0115 09:50:37.951825   26437 main.go:141] libmachine: Parsing certificate...
	I0115 09:50:37.951879   26437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 09:50:37.951900   26437 main.go:141] libmachine: Decoding PEM data...
	I0115 09:50:37.951920   26437 main.go:141] libmachine: Parsing certificate...
	I0115 09:50:37.951937   26437 main.go:141] libmachine: Running pre-create checks...
	I0115 09:50:37.951946   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .PreCreateCheck
	I0115 09:50:37.952082   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetConfigRaw
	I0115 09:50:37.952442   26437 main.go:141] libmachine: Creating machine...
	I0115 09:50:37.952456   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .Create
	I0115 09:50:37.952581   26437 main.go:141] libmachine: (multinode-975382-m02) Creating KVM machine...
	I0115 09:50:37.953757   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found existing default KVM network
	I0115 09:50:37.953881   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found existing private KVM network mk-multinode-975382
	I0115 09:50:37.953977   26437 main.go:141] libmachine: (multinode-975382-m02) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02 ...
	I0115 09:50:37.954004   26437 main.go:141] libmachine: (multinode-975382-m02) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 09:50:37.954088   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:37.953972   26799 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:50:37.954144   26437 main.go:141] libmachine: (multinode-975382-m02) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 09:50:38.168930   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:38.168803   26799 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa...
	I0115 09:50:38.573718   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:38.573605   26799 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/multinode-975382-m02.rawdisk...
	I0115 09:50:38.573752   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Writing magic tar header
	I0115 09:50:38.573769   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Writing SSH key tar header
	I0115 09:50:38.573782   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:38.573721   26799 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02 ...
	I0115 09:50:38.573804   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02
	I0115 09:50:38.573870   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02 (perms=drwx------)
	I0115 09:50:38.573901   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 09:50:38.573920   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 09:50:38.573936   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:50:38.573949   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 09:50:38.573972   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 09:50:38.573992   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 09:50:38.574006   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home/jenkins
	I0115 09:50:38.574021   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Checking permissions on dir: /home
	I0115 09:50:38.574031   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Skipping /home - not owner
	I0115 09:50:38.574043   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 09:50:38.574052   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 09:50:38.574064   26437 main.go:141] libmachine: (multinode-975382-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 09:50:38.574075   26437 main.go:141] libmachine: (multinode-975382-m02) Creating domain...
	I0115 09:50:38.575004   26437 main.go:141] libmachine: (multinode-975382-m02) define libvirt domain using xml: 
	I0115 09:50:38.575026   26437 main.go:141] libmachine: (multinode-975382-m02) <domain type='kvm'>
	I0115 09:50:38.575035   26437 main.go:141] libmachine: (multinode-975382-m02)   <name>multinode-975382-m02</name>
	I0115 09:50:38.575044   26437 main.go:141] libmachine: (multinode-975382-m02)   <memory unit='MiB'>2200</memory>
	I0115 09:50:38.575054   26437 main.go:141] libmachine: (multinode-975382-m02)   <vcpu>2</vcpu>
	I0115 09:50:38.575059   26437 main.go:141] libmachine: (multinode-975382-m02)   <features>
	I0115 09:50:38.575067   26437 main.go:141] libmachine: (multinode-975382-m02)     <acpi/>
	I0115 09:50:38.575073   26437 main.go:141] libmachine: (multinode-975382-m02)     <apic/>
	I0115 09:50:38.575081   26437 main.go:141] libmachine: (multinode-975382-m02)     <pae/>
	I0115 09:50:38.575089   26437 main.go:141] libmachine: (multinode-975382-m02)     
	I0115 09:50:38.575097   26437 main.go:141] libmachine: (multinode-975382-m02)   </features>
	I0115 09:50:38.575105   26437 main.go:141] libmachine: (multinode-975382-m02)   <cpu mode='host-passthrough'>
	I0115 09:50:38.575111   26437 main.go:141] libmachine: (multinode-975382-m02)   
	I0115 09:50:38.575118   26437 main.go:141] libmachine: (multinode-975382-m02)   </cpu>
	I0115 09:50:38.575144   26437 main.go:141] libmachine: (multinode-975382-m02)   <os>
	I0115 09:50:38.575167   26437 main.go:141] libmachine: (multinode-975382-m02)     <type>hvm</type>
	I0115 09:50:38.575180   26437 main.go:141] libmachine: (multinode-975382-m02)     <boot dev='cdrom'/>
	I0115 09:50:38.575192   26437 main.go:141] libmachine: (multinode-975382-m02)     <boot dev='hd'/>
	I0115 09:50:38.575207   26437 main.go:141] libmachine: (multinode-975382-m02)     <bootmenu enable='no'/>
	I0115 09:50:38.575219   26437 main.go:141] libmachine: (multinode-975382-m02)   </os>
	I0115 09:50:38.575234   26437 main.go:141] libmachine: (multinode-975382-m02)   <devices>
	I0115 09:50:38.575248   26437 main.go:141] libmachine: (multinode-975382-m02)     <disk type='file' device='cdrom'>
	I0115 09:50:38.575269   26437 main.go:141] libmachine: (multinode-975382-m02)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/boot2docker.iso'/>
	I0115 09:50:38.575282   26437 main.go:141] libmachine: (multinode-975382-m02)       <target dev='hdc' bus='scsi'/>
	I0115 09:50:38.575295   26437 main.go:141] libmachine: (multinode-975382-m02)       <readonly/>
	I0115 09:50:38.575311   26437 main.go:141] libmachine: (multinode-975382-m02)     </disk>
	I0115 09:50:38.575327   26437 main.go:141] libmachine: (multinode-975382-m02)     <disk type='file' device='disk'>
	I0115 09:50:38.575342   26437 main.go:141] libmachine: (multinode-975382-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 09:50:38.575363   26437 main.go:141] libmachine: (multinode-975382-m02)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/multinode-975382-m02.rawdisk'/>
	I0115 09:50:38.575376   26437 main.go:141] libmachine: (multinode-975382-m02)       <target dev='hda' bus='virtio'/>
	I0115 09:50:38.575390   26437 main.go:141] libmachine: (multinode-975382-m02)     </disk>
	I0115 09:50:38.575405   26437 main.go:141] libmachine: (multinode-975382-m02)     <interface type='network'>
	I0115 09:50:38.575419   26437 main.go:141] libmachine: (multinode-975382-m02)       <source network='mk-multinode-975382'/>
	I0115 09:50:38.575432   26437 main.go:141] libmachine: (multinode-975382-m02)       <model type='virtio'/>
	I0115 09:50:38.575442   26437 main.go:141] libmachine: (multinode-975382-m02)     </interface>
	I0115 09:50:38.575457   26437 main.go:141] libmachine: (multinode-975382-m02)     <interface type='network'>
	I0115 09:50:38.575474   26437 main.go:141] libmachine: (multinode-975382-m02)       <source network='default'/>
	I0115 09:50:38.575493   26437 main.go:141] libmachine: (multinode-975382-m02)       <model type='virtio'/>
	I0115 09:50:38.575506   26437 main.go:141] libmachine: (multinode-975382-m02)     </interface>
	I0115 09:50:38.575520   26437 main.go:141] libmachine: (multinode-975382-m02)     <serial type='pty'>
	I0115 09:50:38.575533   26437 main.go:141] libmachine: (multinode-975382-m02)       <target port='0'/>
	I0115 09:50:38.575547   26437 main.go:141] libmachine: (multinode-975382-m02)     </serial>
	I0115 09:50:38.575564   26437 main.go:141] libmachine: (multinode-975382-m02)     <console type='pty'>
	I0115 09:50:38.575579   26437 main.go:141] libmachine: (multinode-975382-m02)       <target type='serial' port='0'/>
	I0115 09:50:38.575591   26437 main.go:141] libmachine: (multinode-975382-m02)     </console>
	I0115 09:50:38.575606   26437 main.go:141] libmachine: (multinode-975382-m02)     <rng model='virtio'>
	I0115 09:50:38.575620   26437 main.go:141] libmachine: (multinode-975382-m02)       <backend model='random'>/dev/random</backend>
	I0115 09:50:38.575638   26437 main.go:141] libmachine: (multinode-975382-m02)     </rng>
	I0115 09:50:38.575653   26437 main.go:141] libmachine: (multinode-975382-m02)     
	I0115 09:50:38.575675   26437 main.go:141] libmachine: (multinode-975382-m02)     
	I0115 09:50:38.575694   26437 main.go:141] libmachine: (multinode-975382-m02)   </devices>
	I0115 09:50:38.575710   26437 main.go:141] libmachine: (multinode-975382-m02) </domain>
	I0115 09:50:38.575719   26437 main.go:141] libmachine: (multinode-975382-m02) 
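	Once the domain is defined, the XML assembled line by line above can be inspected as a single document with the libvirt CLI; a sketch using the qemu:///system URI from the machine config:

	  # Dump the generated domain definition and check its state.
	  virsh --connect qemu:///system dumpxml multinode-975382-m02
	  virsh --connect qemu:///system domstate multinode-975382-m02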
	I0115 09:50:38.582675   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:cb:79:6a in network default
	I0115 09:50:38.583234   26437 main.go:141] libmachine: (multinode-975382-m02) Ensuring networks are active...
	I0115 09:50:38.583260   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:38.583891   26437 main.go:141] libmachine: (multinode-975382-m02) Ensuring network default is active
	I0115 09:50:38.584239   26437 main.go:141] libmachine: (multinode-975382-m02) Ensuring network mk-multinode-975382 is active
	I0115 09:50:38.584572   26437 main.go:141] libmachine: (multinode-975382-m02) Getting domain xml...
	I0115 09:50:38.585449   26437 main.go:141] libmachine: (multinode-975382-m02) Creating domain...
	I0115 09:50:39.756128   26437 main.go:141] libmachine: (multinode-975382-m02) Waiting to get IP...
	I0115 09:50:39.756976   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:39.757327   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:39.757350   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:39.757301   26799 retry.go:31] will retry after 199.322417ms: waiting for machine to come up
	I0115 09:50:39.958610   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:39.958993   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:39.959023   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:39.958943   26799 retry.go:31] will retry after 328.784598ms: waiting for machine to come up
	I0115 09:50:40.289335   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:40.289733   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:40.289773   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:40.289703   26799 retry.go:31] will retry after 384.179727ms: waiting for machine to come up
	I0115 09:50:40.675245   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:40.675774   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:40.675806   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:40.675726   26799 retry.go:31] will retry after 422.746745ms: waiting for machine to come up
	I0115 09:50:41.100240   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:41.100636   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:41.100668   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:41.100582   26799 retry.go:31] will retry after 529.010446ms: waiting for machine to come up
	I0115 09:50:41.631185   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:41.631566   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:41.631598   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:41.631518   26799 retry.go:31] will retry after 847.728518ms: waiting for machine to come up
	I0115 09:50:42.480330   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:42.480776   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:42.480809   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:42.480725   26799 retry.go:31] will retry after 723.566324ms: waiting for machine to come up
	I0115 09:50:43.205599   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:43.205933   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:43.205962   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:43.205881   26799 retry.go:31] will retry after 1.075146278s: waiting for machine to come up
	I0115 09:50:44.282203   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:44.282624   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:44.282679   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:44.282587   26799 retry.go:31] will retry after 1.793734935s: waiting for machine to come up
	I0115 09:50:46.078550   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:46.078908   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:46.078943   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:46.078880   26799 retry.go:31] will retry after 2.264540941s: waiting for machine to come up
	I0115 09:50:48.345142   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:48.345524   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:48.345555   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:48.345468   26799 retry.go:31] will retry after 1.758607854s: waiting for machine to come up
	I0115 09:50:50.105861   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:50.106315   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:50.106339   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:50.106290   26799 retry.go:31] will retry after 2.207914307s: waiting for machine to come up
	I0115 09:50:52.315404   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:52.315828   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:52.315850   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:52.315787   26799 retry.go:31] will retry after 4.192329056s: waiting for machine to come up
	I0115 09:50:56.509265   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:50:56.509730   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find current IP address of domain multinode-975382-m02 in network mk-multinode-975382
	I0115 09:50:56.509765   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | I0115 09:50:56.509650   26799 retry.go:31] will retry after 5.008559694s: waiting for machine to come up
	I0115 09:51:01.519446   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.519880   26437 main.go:141] libmachine: (multinode-975382-m02) Found IP for machine: 192.168.39.95
	I0115 09:51:01.519913   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has current primary IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.519923   26437 main.go:141] libmachine: (multinode-975382-m02) Reserving static IP address...
	I0115 09:51:01.520282   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | unable to find host DHCP lease matching {name: "multinode-975382-m02", mac: "52:54:00:e1:55:d5", ip: "192.168.39.95"} in network mk-multinode-975382
	I0115 09:51:01.591618   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Getting to WaitForSSH function...
	I0115 09:51:01.591654   26437 main.go:141] libmachine: (multinode-975382-m02) Reserved static IP address: 192.168.39.95
	I0115 09:51:01.591668   26437 main.go:141] libmachine: (multinode-975382-m02) Waiting for SSH to be available...
	I0115 09:51:01.594358   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.594889   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:01.594920   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.595029   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Using SSH client type: external
	I0115 09:51:01.595058   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa (-rw-------)
	I0115 09:51:01.595096   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 09:51:01.595110   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | About to run SSH command:
	I0115 09:51:01.595130   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | exit 0
	I0115 09:51:01.677773   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | SSH cmd err, output: <nil>: 
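	The WaitForSSH probe above boils down to running "exit 0" over SSH with the freshly generated key; roughly the same check by hand, using the host, user, and key path taken from the log:

	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa \
	      docker@192.168.39.95 'exit 0' && echo "ssh reachable"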
	I0115 09:51:01.678032   26437 main.go:141] libmachine: (multinode-975382-m02) KVM machine creation complete!
	I0115 09:51:01.678374   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetConfigRaw
	I0115 09:51:01.678909   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:01.679120   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:01.679302   26437 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 09:51:01.679321   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetState
	I0115 09:51:01.680520   26437 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 09:51:01.680536   26437 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 09:51:01.680543   26437 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 09:51:01.680549   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:01.682883   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.683260   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:01.683283   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.683441   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:01.683596   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.683760   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.683936   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:01.684126   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:01.684603   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:01.684621   26437 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 09:51:01.789340   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:51:01.789364   26437 main.go:141] libmachine: Detecting the provisioner...
	I0115 09:51:01.789374   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:01.791871   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.792175   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:01.792203   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.792363   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:01.792558   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.792731   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.792873   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:01.793031   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:01.793346   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:01.793359   26437 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 09:51:01.906891   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 09:51:01.906986   26437 main.go:141] libmachine: found compatible host: buildroot
	I0115 09:51:01.907001   26437 main.go:141] libmachine: Provisioning with buildroot...
	I0115 09:51:01.907010   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 09:51:01.907302   26437 buildroot.go:166] provisioning hostname "multinode-975382-m02"
	I0115 09:51:01.907351   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 09:51:01.907554   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:01.909893   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.910208   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:01.910232   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:01.910381   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:01.910576   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.910739   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:01.910850   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:01.911008   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:01.911355   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:01.911370   26437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382-m02 && echo "multinode-975382-m02" | sudo tee /etc/hostname
	I0115 09:51:02.032183   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-975382-m02
	
	I0115 09:51:02.032211   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.034546   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.034920   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.034963   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.035076   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.035279   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.035432   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.035547   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.035679   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:02.035983   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:02.036000   26437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-975382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-975382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-975382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:51:02.152133   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
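	The hostname and /etc/hosts fix-up above can be verified on the new node through minikube's own SSH helper; a sketch, assuming the node name used here:

	  minikube -p multinode-975382 ssh -n multinode-975382-m02 -- "hostname; grep multinode-975382-m02 /etc/hosts"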
	I0115 09:51:02.152169   26437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 09:51:02.152190   26437 buildroot.go:174] setting up certificates
	I0115 09:51:02.152207   26437 provision.go:83] configureAuth start
	I0115 09:51:02.152222   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 09:51:02.152463   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 09:51:02.155133   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.155423   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.155451   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.155584   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.157744   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.158057   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.158103   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.158178   26437 provision.go:138] copyHostCerts
	I0115 09:51:02.158201   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:51:02.158228   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 09:51:02.158237   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 09:51:02.158290   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 09:51:02.158351   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:51:02.158367   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 09:51:02.158373   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 09:51:02.158394   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 09:51:02.158459   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:51:02.158478   26437 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 09:51:02.158484   26437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 09:51:02.158516   26437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 09:51:02.158582   26437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.multinode-975382-m02 san=[192.168.39.95 192.168.39.95 localhost 127.0.0.1 minikube multinode-975382-m02]
	I0115 09:51:02.256934   26437 provision.go:172] copyRemoteCerts
	I0115 09:51:02.256987   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:51:02.257009   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.259616   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.259927   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.259958   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.260106   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.260308   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.260475   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.260605   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 09:51:02.343194   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:51:02.343281   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 09:51:02.365876   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:51:02.365952   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0115 09:51:02.388058   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:51:02.388116   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 09:51:02.409975   26437 provision.go:86] duration metric: configureAuth took 257.757067ms
	I0115 09:51:02.409995   26437 buildroot.go:189] setting minikube options for container-runtime
	I0115 09:51:02.410190   26437 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:51:02.410278   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.412755   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.413180   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.413211   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.413408   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.413626   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.413795   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.413948   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.414111   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:02.414577   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:02.414600   26437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:51:02.724150   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:51:02.724182   26437 main.go:141] libmachine: Checking connection to Docker...
	I0115 09:51:02.724192   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetURL
	I0115 09:51:02.725417   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | Using libvirt version 6000000
	I0115 09:51:02.727490   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.727802   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.727830   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.727960   26437 main.go:141] libmachine: Docker is up and running!
	I0115 09:51:02.727974   26437 main.go:141] libmachine: Reticulating splines...
	I0115 09:51:02.727980   26437 client.go:171] LocalClient.Create took 24.776220344s
	I0115 09:51:02.727994   26437 start.go:167] duration metric: libmachine.API.Create for "multinode-975382" took 24.776256172s
	I0115 09:51:02.728003   26437 start.go:300] post-start starting for "multinode-975382-m02" (driver="kvm2")
	I0115 09:51:02.728013   26437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:51:02.728027   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:02.728246   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:51:02.728269   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.730301   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.730626   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.730653   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.730723   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.730876   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.731025   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.731169   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 09:51:02.820213   26437 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:51:02.824098   26437 command_runner.go:130] > NAME=Buildroot
	I0115 09:51:02.824123   26437 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0115 09:51:02.824130   26437 command_runner.go:130] > ID=buildroot
	I0115 09:51:02.824137   26437 command_runner.go:130] > VERSION_ID=2021.02.12
	I0115 09:51:02.824145   26437 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0115 09:51:02.824386   26437 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 09:51:02.824405   26437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 09:51:02.824467   26437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 09:51:02.824553   26437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 09:51:02.824563   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 09:51:02.824660   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:51:02.833370   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:51:02.854435   26437 start.go:303] post-start completed in 126.396091ms
	I0115 09:51:02.854476   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetConfigRaw
	I0115 09:51:02.855023   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 09:51:02.857316   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.857672   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.857705   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.857908   26437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:51:02.858089   26437 start.go:128] duration metric: createHost completed in 24.923604499s
	I0115 09:51:02.858114   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.860367   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.860733   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.860758   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.860906   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.861094   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.861249   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.861402   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.861565   26437 main.go:141] libmachine: Using SSH client type: native
	I0115 09:51:02.861875   26437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 09:51:02.861886   26437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 09:51:02.974742   26437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705312262.946508968
	
	I0115 09:51:02.974758   26437 fix.go:206] guest clock: 1705312262.946508968
	I0115 09:51:02.974764   26437 fix.go:219] Guest: 2024-01-15 09:51:02.946508968 +0000 UTC Remote: 2024-01-15 09:51:02.858101211 +0000 UTC m=+91.694176762 (delta=88.407757ms)
	I0115 09:51:02.974779   26437 fix.go:190] guest clock delta is within tolerance: 88.407757ms
	I0115 09:51:02.974783   26437 start.go:83] releasing machines lock for "multinode-975382-m02", held for 25.040397173s
	I0115 09:51:02.974800   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:02.975027   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 09:51:02.977433   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.977821   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.977854   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.980020   26437 out.go:177] * Found network options:
	I0115 09:51:02.981346   26437 out.go:177]   - NO_PROXY=192.168.39.217
	W0115 09:51:02.982611   26437 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 09:51:02.982660   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:02.983137   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:02.983281   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:51:02.983341   26437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:51:02.983374   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	W0115 09:51:02.983458   26437 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 09:51:02.983533   26437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:51:02.983556   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:51:02.985737   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.986000   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.986075   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.986104   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.986276   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.986364   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:02.986393   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:02.986471   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.986539   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:51:02.986621   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.986680   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:51:02.986744   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 09:51:02.986800   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:51:02.986913   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 09:51:03.219701   26437 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 09:51:03.219744   26437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:51:03.226041   26437 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0115 09:51:03.226399   26437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 09:51:03.226483   26437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:51:03.240794   26437 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0115 09:51:03.240988   26437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 09:51:03.241007   26437 start.go:475] detecting cgroup driver to use...
	I0115 09:51:03.241060   26437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:51:03.254637   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:51:03.266784   26437 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:51:03.266839   26437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:51:03.279076   26437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:51:03.290117   26437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:51:03.389713   26437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0115 09:51:03.389804   26437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:51:03.403230   26437 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 09:51:03.508349   26437 docker.go:233] disabling docker service ...
	I0115 09:51:03.508431   26437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:51:03.521711   26437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:51:03.533400   26437 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0115 09:51:03.533490   26437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:51:03.633210   26437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 09:51:03.633298   26437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:51:03.645567   26437 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0115 09:51:03.645954   26437 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 09:51:03.730502   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:51:03.743045   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:51:03.760414   26437 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 09:51:03.760778   26437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:51:03.760827   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:51:03.770579   26437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:51:03.770645   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:51:03.780918   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:51:03.791465   26437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:51:03.801644   26437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:51:03.811805   26437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:51:03.820323   26437 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:51:03.820578   26437 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 09:51:03.820635   26437 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 09:51:03.833746   26437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:51:03.842823   26437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:51:03.952415   26437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:51:04.108019   26437 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:51:04.108105   26437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:51:04.112617   26437 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 09:51:04.112635   26437 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 09:51:04.112642   26437 command_runner.go:130] > Device: 16h/22d	Inode: 773         Links: 1
	I0115 09:51:04.112649   26437 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:51:04.112653   26437 command_runner.go:130] > Access: 2024-01-15 09:51:04.068478881 +0000
	I0115 09:51:04.112660   26437 command_runner.go:130] > Modify: 2024-01-15 09:51:04.068478881 +0000
	I0115 09:51:04.112669   26437 command_runner.go:130] > Change: 2024-01-15 09:51:04.068478881 +0000
	I0115 09:51:04.112679   26437 command_runner.go:130] >  Birth: -
	I0115 09:51:04.112836   26437 start.go:543] Will wait 60s for crictl version
	I0115 09:51:04.112887   26437 ssh_runner.go:195] Run: which crictl
	I0115 09:51:04.119233   26437 command_runner.go:130] > /usr/bin/crictl
	I0115 09:51:04.119351   26437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:51:04.155979   26437 command_runner.go:130] > Version:  0.1.0
	I0115 09:51:04.156005   26437 command_runner.go:130] > RuntimeName:  cri-o
	I0115 09:51:04.156012   26437 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0115 09:51:04.156020   26437 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 09:51:04.156073   26437 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 09:51:04.156149   26437 ssh_runner.go:195] Run: crio --version
	I0115 09:51:04.205726   26437 command_runner.go:130] > crio version 1.24.1
	I0115 09:51:04.205752   26437 command_runner.go:130] > Version:          1.24.1
	I0115 09:51:04.205760   26437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 09:51:04.205764   26437 command_runner.go:130] > GitTreeState:     dirty
	I0115 09:51:04.205769   26437 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 09:51:04.205774   26437 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 09:51:04.205778   26437 command_runner.go:130] > Compiler:         gc
	I0115 09:51:04.205782   26437 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:51:04.205787   26437 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:51:04.205793   26437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:51:04.205798   26437 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:51:04.205802   26437 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:51:04.207015   26437 ssh_runner.go:195] Run: crio --version
	I0115 09:51:04.248882   26437 command_runner.go:130] > crio version 1.24.1
	I0115 09:51:04.248907   26437 command_runner.go:130] > Version:          1.24.1
	I0115 09:51:04.248920   26437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 09:51:04.248933   26437 command_runner.go:130] > GitTreeState:     dirty
	I0115 09:51:04.248943   26437 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 09:51:04.248949   26437 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 09:51:04.248955   26437 command_runner.go:130] > Compiler:         gc
	I0115 09:51:04.248964   26437 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:51:04.248973   26437 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:51:04.248985   26437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:51:04.248993   26437 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:51:04.249001   26437 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:51:04.251603   26437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 09:51:04.252878   26437 out.go:177]   - env NO_PROXY=192.168.39.217
	I0115 09:51:04.254130   26437 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 09:51:04.256837   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:04.257178   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:51:04.257191   26437 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:51:04.257407   26437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 09:51:04.261441   26437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:51:04.273847   26437 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382 for IP: 192.168.39.95
	I0115 09:51:04.273877   26437 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:51:04.274027   26437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 09:51:04.274072   26437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 09:51:04.274085   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:51:04.274099   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:51:04.274111   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:51:04.274122   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:51:04.274166   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 09:51:04.274191   26437 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 09:51:04.274204   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 09:51:04.274227   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 09:51:04.274251   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:51:04.274272   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 09:51:04.274308   26437 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 09:51:04.274332   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 09:51:04.274342   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:51:04.274352   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 09:51:04.274738   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:51:04.297433   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:51:04.318985   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:51:04.340839   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:51:04.363003   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 09:51:04.386986   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:51:04.408961   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 09:51:04.430940   26437 ssh_runner.go:195] Run: openssl version
	I0115 09:51:04.436660   26437 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0115 09:51:04.436718   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:51:04.445636   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:51:04.449698   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:51:04.449738   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:51:04.449775   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:51:04.454665   26437 command_runner.go:130] > b5213941
	I0115 09:51:04.454908   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:51:04.463519   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 09:51:04.472062   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 09:51:04.476659   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 09:51:04.476691   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 09:51:04.476727   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 09:51:04.482189   26437 command_runner.go:130] > 51391683
	I0115 09:51:04.482239   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 09:51:04.490936   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 09:51:04.502244   26437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 09:51:04.507139   26437 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 09:51:04.507162   26437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 09:51:04.507202   26437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 09:51:04.512376   26437 command_runner.go:130] > 3ec20f2e
	I0115 09:51:04.512437   26437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 09:51:04.521995   26437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:51:04.525893   26437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:51:04.525929   26437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:51:04.526003   26437 ssh_runner.go:195] Run: crio config
	I0115 09:51:04.576864   26437 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 09:51:04.576912   26437 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 09:51:04.576924   26437 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 09:51:04.576930   26437 command_runner.go:130] > #
	I0115 09:51:04.576947   26437 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 09:51:04.576957   26437 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 09:51:04.576968   26437 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 09:51:04.576982   26437 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 09:51:04.576994   26437 command_runner.go:130] > # reload'.
	I0115 09:51:04.577003   26437 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 09:51:04.577013   26437 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 09:51:04.577022   26437 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 09:51:04.577032   26437 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 09:51:04.577038   26437 command_runner.go:130] > [crio]
	I0115 09:51:04.577048   26437 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 09:51:04.577060   26437 command_runner.go:130] > # containers images, in this directory.
	I0115 09:51:04.577068   26437 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0115 09:51:04.577083   26437 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 09:51:04.577101   26437 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0115 09:51:04.577111   26437 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 09:51:04.577121   26437 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 09:51:04.577132   26437 command_runner.go:130] > storage_driver = "overlay"
	I0115 09:51:04.577141   26437 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 09:51:04.577155   26437 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 09:51:04.577162   26437 command_runner.go:130] > storage_option = [
	I0115 09:51:04.577171   26437 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0115 09:51:04.577176   26437 command_runner.go:130] > ]
	I0115 09:51:04.577187   26437 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 09:51:04.577197   26437 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 09:51:04.577208   26437 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 09:51:04.577218   26437 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 09:51:04.577231   26437 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 09:51:04.577241   26437 command_runner.go:130] > # always happen on a node reboot
	I0115 09:51:04.577250   26437 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 09:51:04.577263   26437 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 09:51:04.577278   26437 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 09:51:04.577300   26437 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 09:51:04.577318   26437 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 09:51:04.577329   26437 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 09:51:04.577344   26437 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 09:51:04.577354   26437 command_runner.go:130] > # internal_wipe = true
	I0115 09:51:04.577366   26437 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 09:51:04.577378   26437 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 09:51:04.577391   26437 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 09:51:04.577402   26437 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 09:51:04.577415   26437 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 09:51:04.577425   26437 command_runner.go:130] > [crio.api]
	I0115 09:51:04.577435   26437 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 09:51:04.577446   26437 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 09:51:04.577458   26437 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 09:51:04.577466   26437 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 09:51:04.577481   26437 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 09:51:04.577494   26437 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 09:51:04.577504   26437 command_runner.go:130] > # stream_port = "0"
	I0115 09:51:04.577516   26437 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 09:51:04.577526   26437 command_runner.go:130] > # stream_enable_tls = false
	I0115 09:51:04.577538   26437 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 09:51:04.577552   26437 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 09:51:04.577564   26437 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 09:51:04.577578   26437 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 09:51:04.577588   26437 command_runner.go:130] > # minutes.
	I0115 09:51:04.577594   26437 command_runner.go:130] > # stream_tls_cert = ""
	I0115 09:51:04.577608   26437 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 09:51:04.577622   26437 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 09:51:04.577633   26437 command_runner.go:130] > # stream_tls_key = ""
	I0115 09:51:04.577645   26437 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 09:51:04.577660   26437 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 09:51:04.577674   26437 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 09:51:04.577685   26437 command_runner.go:130] > # stream_tls_ca = ""
	I0115 09:51:04.577699   26437 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:51:04.577710   26437 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0115 09:51:04.577722   26437 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:51:04.577732   26437 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0115 09:51:04.577748   26437 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 09:51:04.577761   26437 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 09:51:04.577771   26437 command_runner.go:130] > [crio.runtime]
	I0115 09:51:04.577780   26437 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 09:51:04.577792   26437 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 09:51:04.577800   26437 command_runner.go:130] > # "nofile=1024:2048"
	I0115 09:51:04.577813   26437 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 09:51:04.577820   26437 command_runner.go:130] > # default_ulimits = [
	I0115 09:51:04.577829   26437 command_runner.go:130] > # ]
	I0115 09:51:04.577839   26437 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 09:51:04.577849   26437 command_runner.go:130] > # no_pivot = false
	I0115 09:51:04.577859   26437 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 09:51:04.577873   26437 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 09:51:04.577885   26437 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 09:51:04.577898   26437 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 09:51:04.577911   26437 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 09:51:04.577926   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:51:04.577942   26437 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0115 09:51:04.577953   26437 command_runner.go:130] > # Cgroup setting for conmon
	I0115 09:51:04.577963   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 09:51:04.577972   26437 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 09:51:04.577982   26437 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 09:51:04.577993   26437 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 09:51:04.578006   26437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:51:04.578013   26437 command_runner.go:130] > conmon_env = [
	I0115 09:51:04.578026   26437 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0115 09:51:04.578034   26437 command_runner.go:130] > ]
	I0115 09:51:04.578042   26437 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 09:51:04.578053   26437 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 09:51:04.578066   26437 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 09:51:04.578075   26437 command_runner.go:130] > # default_env = [
	I0115 09:51:04.578084   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578096   26437 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 09:51:04.578103   26437 command_runner.go:130] > # selinux = false
	I0115 09:51:04.578114   26437 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 09:51:04.578127   26437 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 09:51:04.578139   26437 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 09:51:04.578149   26437 command_runner.go:130] > # seccomp_profile = ""
	I0115 09:51:04.578159   26437 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 09:51:04.578171   26437 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 09:51:04.578182   26437 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 09:51:04.578192   26437 command_runner.go:130] > # which might increase security.
	I0115 09:51:04.578203   26437 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0115 09:51:04.578216   26437 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 09:51:04.578230   26437 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 09:51:04.578242   26437 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 09:51:04.578255   26437 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 09:51:04.578267   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:51:04.578278   26437 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 09:51:04.578289   26437 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 09:51:04.578299   26437 command_runner.go:130] > # the cgroup blockio controller.
	I0115 09:51:04.578309   26437 command_runner.go:130] > # blockio_config_file = ""
	I0115 09:51:04.578324   26437 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 09:51:04.578336   26437 command_runner.go:130] > # irqbalance daemon.
	I0115 09:51:04.578350   26437 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 09:51:04.578363   26437 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 09:51:04.578374   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:51:04.578384   26437 command_runner.go:130] > # rdt_config_file = ""
	I0115 09:51:04.578398   26437 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 09:51:04.578410   26437 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 09:51:04.578431   26437 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 09:51:04.578442   26437 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 09:51:04.578452   26437 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 09:51:04.578467   26437 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 09:51:04.578477   26437 command_runner.go:130] > # will be added.
	I0115 09:51:04.578485   26437 command_runner.go:130] > # default_capabilities = [
	I0115 09:51:04.578495   26437 command_runner.go:130] > # 	"CHOWN",
	I0115 09:51:04.578505   26437 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 09:51:04.578515   26437 command_runner.go:130] > # 	"FSETID",
	I0115 09:51:04.578522   26437 command_runner.go:130] > # 	"FOWNER",
	I0115 09:51:04.578533   26437 command_runner.go:130] > # 	"SETGID",
	I0115 09:51:04.578540   26437 command_runner.go:130] > # 	"SETUID",
	I0115 09:51:04.578550   26437 command_runner.go:130] > # 	"SETPCAP",
	I0115 09:51:04.578557   26437 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 09:51:04.578565   26437 command_runner.go:130] > # 	"KILL",
	I0115 09:51:04.578571   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578584   26437 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 09:51:04.578596   26437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:51:04.578606   26437 command_runner.go:130] > # default_sysctls = [
	I0115 09:51:04.578611   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578623   26437 command_runner.go:130] > # List of devices on the host that a
	I0115 09:51:04.578633   26437 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 09:51:04.578643   26437 command_runner.go:130] > # allowed_devices = [
	I0115 09:51:04.578649   26437 command_runner.go:130] > # 	"/dev/fuse",
	I0115 09:51:04.578658   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578666   26437 command_runner.go:130] > # List of additional devices. specified as
	I0115 09:51:04.578680   26437 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 09:51:04.578692   26437 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 09:51:04.578717   26437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:51:04.578729   26437 command_runner.go:130] > # additional_devices = [
	I0115 09:51:04.578739   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578749   26437 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 09:51:04.578759   26437 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 09:51:04.578766   26437 command_runner.go:130] > # 	"/etc/cdi",
	I0115 09:51:04.578776   26437 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 09:51:04.578785   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578795   26437 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 09:51:04.578809   26437 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 09:51:04.578819   26437 command_runner.go:130] > # Defaults to false.
	I0115 09:51:04.578830   26437 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 09:51:04.578846   26437 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 09:51:04.578859   26437 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 09:51:04.578868   26437 command_runner.go:130] > # hooks_dir = [
	I0115 09:51:04.578878   26437 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 09:51:04.578887   26437 command_runner.go:130] > # ]
	I0115 09:51:04.578898   26437 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 09:51:04.578912   26437 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 09:51:04.578924   26437 command_runner.go:130] > # its default mounts from the following two files:
	I0115 09:51:04.578932   26437 command_runner.go:130] > #
	I0115 09:51:04.578948   26437 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 09:51:04.578962   26437 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 09:51:04.578975   26437 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 09:51:04.578981   26437 command_runner.go:130] > #
	I0115 09:51:04.578991   26437 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 09:51:04.579005   26437 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 09:51:04.579019   26437 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 09:51:04.579032   26437 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 09:51:04.579041   26437 command_runner.go:130] > #
	I0115 09:51:04.579049   26437 command_runner.go:130] > # default_mounts_file = ""
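
	As a sketch of the /SRC:/DST format described above (the file path and mount pair are assumptions, not values from this run), a custom mounts file could be wired in like this:

		# contents of /etc/containers/custom-mounts.conf (one /SRC:/DST pair per line):
		#   /usr/share/secrets:/run/secrets
		default_mounts_file = "/etc/containers/custom-mounts.conf"
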
	I0115 09:51:04.579060   26437 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 09:51:04.579074   26437 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 09:51:04.579084   26437 command_runner.go:130] > pids_limit = 1024
	I0115 09:51:04.579093   26437 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 09:51:04.579107   26437 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 09:51:04.579120   26437 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 09:51:04.579137   26437 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 09:51:04.579147   26437 command_runner.go:130] > # log_size_max = -1
	I0115 09:51:04.579160   26437 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 09:51:04.579169   26437 command_runner.go:130] > # log_to_journald = false
	I0115 09:51:04.579180   26437 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 09:51:04.579193   26437 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 09:51:04.579202   26437 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 09:51:04.579214   26437 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 09:51:04.579226   26437 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 09:51:04.579236   26437 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 09:51:04.579248   26437 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 09:51:04.579258   26437 command_runner.go:130] > # read_only = false
	I0115 09:51:04.579271   26437 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 09:51:04.579285   26437 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 09:51:04.579294   26437 command_runner.go:130] > # live configuration reload.
	I0115 09:51:04.579304   26437 command_runner.go:130] > # log_level = "info"
	I0115 09:51:04.579316   26437 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 09:51:04.579329   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:51:04.579338   26437 command_runner.go:130] > # log_filter = ""
	I0115 09:51:04.579349   26437 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 09:51:04.579361   26437 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 09:51:04.579372   26437 command_runner.go:130] > # separated by comma.
	I0115 09:51:04.579382   26437 command_runner.go:130] > # uid_mappings = ""
	I0115 09:51:04.579392   26437 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 09:51:04.579405   26437 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 09:51:04.579414   26437 command_runner.go:130] > # separated by comma.
	I0115 09:51:04.579422   26437 command_runner.go:130] > # gid_mappings = ""
	I0115 09:51:04.579436   26437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 09:51:04.579451   26437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:51:04.579466   26437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:51:04.579478   26437 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 09:51:04.579489   26437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 09:51:04.579504   26437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:51:04.579518   26437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:51:04.579529   26437 command_runner.go:130] > # minimum_mappable_gid = -1
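
	Following the containerID:HostID:Size form described above, a single-range mapping could look like the following (the host ID range and size are illustrative assumptions):

		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"
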
	I0115 09:51:04.579545   26437 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 09:51:04.579558   26437 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 09:51:04.579571   26437 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 09:51:04.579578   26437 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 09:51:04.579591   26437 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 09:51:04.579602   26437 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 09:51:04.579611   26437 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 09:51:04.579624   26437 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 09:51:04.579633   26437 command_runner.go:130] > drop_infra_ctr = false
	I0115 09:51:04.579644   26437 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 09:51:04.579656   26437 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 09:51:04.579667   26437 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 09:51:04.579679   26437 command_runner.go:130] > # infra_ctr_cpuset = ""
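
	A minimal sketch using the Linux CPU list format mentioned above, assuming CPUs 0-1 are the kubelet's reserved-cpus (an assumption, not taken from this run):

		infra_ctr_cpuset = "0-1"
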
	I0115 09:51:04.579689   26437 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 09:51:04.579700   26437 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 09:51:04.579707   26437 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 09:51:04.579721   26437 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 09:51:04.579732   26437 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0115 09:51:04.579741   26437 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 09:51:04.579755   26437 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 09:51:04.579767   26437 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 09:51:04.579778   26437 command_runner.go:130] > # default_runtime = "runc"
	I0115 09:51:04.579789   26437 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 09:51:04.579805   26437 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0115 09:51:04.579823   26437 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 09:51:04.579834   26437 command_runner.go:130] > # creation as a file is not desired either.
	I0115 09:51:04.579851   26437 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 09:51:04.579860   26437 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 09:51:04.579871   26437 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 09:51:04.579876   26437 command_runner.go:130] > # ]
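
	Using the /etc/hostname example from the comment above, an uncommented entry would look like:

		absent_mount_sources_to_reject = [
			"/etc/hostname",
		]
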
	I0115 09:51:04.579885   26437 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 09:51:04.579897   26437 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 09:51:04.579908   26437 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 09:51:04.579918   26437 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 09:51:04.579924   26437 command_runner.go:130] > #
	I0115 09:51:04.579941   26437 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 09:51:04.579952   26437 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 09:51:04.579962   26437 command_runner.go:130] > #  runtime_type = "oci"
	I0115 09:51:04.579970   26437 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 09:51:04.579979   26437 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 09:51:04.579990   26437 command_runner.go:130] > #  allowed_annotations = []
	I0115 09:51:04.579999   26437 command_runner.go:130] > # Where:
	I0115 09:51:04.580010   26437 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 09:51:04.580023   26437 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 09:51:04.580033   26437 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 09:51:04.580046   26437 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 09:51:04.580054   26437 command_runner.go:130] > #   in $PATH.
	I0115 09:51:04.580064   26437 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 09:51:04.580075   26437 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 09:51:04.580086   26437 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 09:51:04.580095   26437 command_runner.go:130] > #   state.
	I0115 09:51:04.580105   26437 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 09:51:04.580120   26437 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 09:51:04.580130   26437 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 09:51:04.580142   26437 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 09:51:04.580155   26437 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 09:51:04.580168   26437 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 09:51:04.580178   26437 command_runner.go:130] > #   The currently recognized values are:
	I0115 09:51:04.580191   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 09:51:04.580206   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 09:51:04.580218   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 09:51:04.580232   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 09:51:04.580246   26437 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 09:51:04.580262   26437 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 09:51:04.580275   26437 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 09:51:04.580288   26437 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 09:51:04.580299   26437 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 09:51:04.580307   26437 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 09:51:04.580325   26437 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0115 09:51:04.580331   26437 command_runner.go:130] > runtime_type = "oci"
	I0115 09:51:04.580338   26437 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 09:51:04.580346   26437 command_runner.go:130] > runtime_config_path = ""
	I0115 09:51:04.580352   26437 command_runner.go:130] > monitor_path = ""
	I0115 09:51:04.580359   26437 command_runner.go:130] > monitor_cgroup = ""
	I0115 09:51:04.580365   26437 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 09:51:04.580376   26437 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 09:51:04.580383   26437 command_runner.go:130] > # running containers
	I0115 09:51:04.580392   26437 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 09:51:04.580404   26437 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 09:51:04.580432   26437 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 09:51:04.580440   26437 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0115 09:51:04.580448   26437 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 09:51:04.580453   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 09:51:04.580459   26437 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 09:51:04.580465   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 09:51:04.580470   26437 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 09:51:04.580476   26437 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
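
	As a hedged sketch of the runtime-handler format documented above, an additional crun handler with one allowed annotation might look like this (the binary path and root directory are assumptions; the annotation is one of the recognized values listed earlier):

		[crio.runtime.runtimes.crun]
		# path is an assumption; adjust to wherever crun is installed
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
		allowed_annotations = [
			"io.kubernetes.cri-o.Devices",
		]
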
	I0115 09:51:04.580483   26437 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 09:51:04.580490   26437 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 09:51:04.580499   26437 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 09:51:04.580508   26437 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 09:51:04.580518   26437 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 09:51:04.580526   26437 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 09:51:04.580537   26437 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 09:51:04.580547   26437 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 09:51:04.580553   26437 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 09:51:04.580562   26437 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 09:51:04.580568   26437 command_runner.go:130] > # Example:
	I0115 09:51:04.580573   26437 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 09:51:04.580580   26437 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 09:51:04.580585   26437 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 09:51:04.580592   26437 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 09:51:04.580596   26437 command_runner.go:130] > # cpuset = 0
	I0115 09:51:04.580603   26437 command_runner.go:130] > # cpushares = "0-1"
	I0115 09:51:04.580607   26437 command_runner.go:130] > # Where:
	I0115 09:51:04.580614   26437 command_runner.go:130] > # The workload name is workload-type.
	I0115 09:51:04.580621   26437 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 09:51:04.580628   26437 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 09:51:04.580634   26437 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 09:51:04.580643   26437 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 09:51:04.580651   26437 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 09:51:04.580656   26437 command_runner.go:130] > # 
	I0115 09:51:04.580663   26437 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 09:51:04.580669   26437 command_runner.go:130] > #
	I0115 09:51:04.580675   26437 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 09:51:04.580683   26437 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 09:51:04.580692   26437 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 09:51:04.580700   26437 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 09:51:04.580708   26437 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 09:51:04.580715   26437 command_runner.go:130] > [crio.image]
	I0115 09:51:04.580721   26437 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 09:51:04.580727   26437 command_runner.go:130] > # default_transport = "docker://"
	I0115 09:51:04.580733   26437 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 09:51:04.580741   26437 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:51:04.580747   26437 command_runner.go:130] > # global_auth_file = ""
	I0115 09:51:04.580752   26437 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 09:51:04.580759   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:51:04.580767   26437 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 09:51:04.580773   26437 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 09:51:04.580781   26437 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:51:04.580788   26437 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:51:04.580792   26437 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 09:51:04.580800   26437 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 09:51:04.580806   26437 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0115 09:51:04.580814   26437 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0115 09:51:04.580823   26437 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 09:51:04.580829   26437 command_runner.go:130] > # pause_command = "/pause"
	I0115 09:51:04.580836   26437 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 09:51:04.580844   26437 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 09:51:04.580852   26437 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 09:51:04.580858   26437 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 09:51:04.580866   26437 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 09:51:04.580870   26437 command_runner.go:130] > # signature_policy = ""
	I0115 09:51:04.580879   26437 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 09:51:04.580887   26437 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 09:51:04.580893   26437 command_runner.go:130] > # changing them here.
	I0115 09:51:04.580898   26437 command_runner.go:130] > # insecure_registries = [
	I0115 09:51:04.580903   26437 command_runner.go:130] > # ]
	I0115 09:51:04.580910   26437 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 09:51:04.580917   26437 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 09:51:04.580921   26437 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 09:51:04.580929   26437 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 09:51:04.580939   26437 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 09:51:04.580949   26437 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 09:51:04.580953   26437 command_runner.go:130] > # CNI plugins.
	I0115 09:51:04.580957   26437 command_runner.go:130] > [crio.network]
	I0115 09:51:04.580963   26437 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 09:51:04.580971   26437 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0115 09:51:04.580979   26437 command_runner.go:130] > # cni_default_network = ""
	I0115 09:51:04.580986   26437 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 09:51:04.580993   26437 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 09:51:04.580998   26437 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 09:51:04.581005   26437 command_runner.go:130] > # plugin_dirs = [
	I0115 09:51:04.581009   26437 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 09:51:04.581021   26437 command_runner.go:130] > # ]
	I0115 09:51:04.581032   26437 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 09:51:04.581036   26437 command_runner.go:130] > [crio.metrics]
	I0115 09:51:04.581043   26437 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 09:51:04.581049   26437 command_runner.go:130] > enable_metrics = true
	I0115 09:51:04.581056   26437 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 09:51:04.581065   26437 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 09:51:04.581078   26437 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0115 09:51:04.581092   26437 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 09:51:04.581104   26437 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 09:51:04.581114   26437 command_runner.go:130] > # metrics_collectors = [
	I0115 09:51:04.581122   26437 command_runner.go:130] > # 	"operations",
	I0115 09:51:04.581132   26437 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 09:51:04.581143   26437 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 09:51:04.581152   26437 command_runner.go:130] > # 	"operations_errors",
	I0115 09:51:04.581162   26437 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 09:51:04.581172   26437 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 09:51:04.581182   26437 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 09:51:04.581193   26437 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 09:51:04.581200   26437 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 09:51:04.581204   26437 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 09:51:04.581211   26437 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 09:51:04.581215   26437 command_runner.go:130] > # 	"containers_oom_total",
	I0115 09:51:04.581222   26437 command_runner.go:130] > # 	"containers_oom",
	I0115 09:51:04.581226   26437 command_runner.go:130] > # 	"processes_defunct",
	I0115 09:51:04.581232   26437 command_runner.go:130] > # 	"operations_total",
	I0115 09:51:04.581237   26437 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 09:51:04.581244   26437 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 09:51:04.581248   26437 command_runner.go:130] > # 	"operations_errors_total",
	I0115 09:51:04.581255   26437 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 09:51:04.581260   26437 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 09:51:04.581266   26437 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 09:51:04.581271   26437 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 09:51:04.581277   26437 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 09:51:04.581282   26437 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 09:51:04.581288   26437 command_runner.go:130] > # ]
	I0115 09:51:04.581293   26437 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 09:51:04.581299   26437 command_runner.go:130] > # metrics_port = 9090
	I0115 09:51:04.581305   26437 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 09:51:04.581311   26437 command_runner.go:130] > # metrics_socket = ""
	I0115 09:51:04.581316   26437 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 09:51:04.581324   26437 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 09:51:04.581337   26437 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 09:51:04.581348   26437 command_runner.go:130] > # certificate on any modification event.
	I0115 09:51:04.581358   26437 command_runner.go:130] > # metrics_cert = ""
	I0115 09:51:04.581369   26437 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 09:51:04.581380   26437 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 09:51:04.581390   26437 command_runner.go:130] > # metrics_key = ""
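
	A minimal sketch that keeps metrics on the default port but restricts collection to two of the collectors named above (the choice of collectors is illustrative, not from this run):

		enable_metrics = true
		metrics_port = 9090
		metrics_collectors = [
			"operations",
			"image_pulls_failures",
		]
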
	I0115 09:51:04.581403   26437 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 09:51:04.581410   26437 command_runner.go:130] > [crio.tracing]
	I0115 09:51:04.581415   26437 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 09:51:04.581422   26437 command_runner.go:130] > # enable_tracing = false
	I0115 09:51:04.581427   26437 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 09:51:04.581434   26437 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 09:51:04.581439   26437 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 09:51:04.581445   26437 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 09:51:04.581452   26437 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 09:51:04.581458   26437 command_runner.go:130] > [crio.stats]
	I0115 09:51:04.581465   26437 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 09:51:04.581473   26437 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 09:51:04.581479   26437 command_runner.go:130] > # stats_collection_period = 0
	I0115 09:51:04.581706   26437 command_runner.go:130] ! time="2024-01-15 09:51:04.549140127Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0115 09:51:04.581728   26437 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 09:51:04.581856   26437 cni.go:84] Creating CNI manager for ""
	I0115 09:51:04.581873   26437 cni.go:136] 2 nodes found, recommending kindnet
	I0115 09:51:04.581885   26437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:51:04.581912   26437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-975382 NodeName:multinode-975382-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:51:04.582049   26437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-975382-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 09:51:04.582099   26437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:51:04.582149   26437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:51:04.590670   26437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0115 09:51:04.590710   26437 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0115 09:51:04.590756   26437 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0115 09:51:04.598774   26437 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0115 09:51:04.598798   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 09:51:04.598849   26437 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0115 09:51:04.598866   26437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0115 09:51:04.598885   26437 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0115 09:51:04.602737   26437 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 09:51:04.602767   26437 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0115 09:51:04.602789   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0115 09:51:05.301181   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 09:51:05.301262   26437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0115 09:51:05.306288   26437 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 09:51:05.306313   26437 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0115 09:51:05.306330   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0115 09:51:05.780444   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:51:05.793447   26437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 09:51:05.793553   26437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0115 09:51:05.797396   26437 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 09:51:05.797765   26437 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0115 09:51:05.797804   26437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0115 09:51:06.266858   26437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0115 09:51:06.276279   26437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0115 09:51:06.291662   26437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:51:06.307347   26437 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0115 09:51:06.311031   26437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:51:06.322363   26437 host.go:66] Checking if "multinode-975382" exists ...
	I0115 09:51:06.322620   26437 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:51:06.322758   26437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:51:06.322798   26437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:51:06.336588   26437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0115 09:51:06.337015   26437 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:51:06.337464   26437 main.go:141] libmachine: Using API Version  1
	I0115 09:51:06.337487   26437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:51:06.337771   26437 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:51:06.337955   26437 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:51:06.338096   26437 start.go:304] JoinCluster: &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:51:06.338177   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 09:51:06.338192   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:51:06.340815   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:51:06.341230   26437 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:51:06.341259   26437 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:51:06.341378   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:51:06.341543   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:51:06.341706   26437 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:51:06.341857   26437 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:51:06.525771   26437 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u86fz7.0zqx241pw8nj97uj --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 09:51:06.530366   26437 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 09:51:06.530403   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u86fz7.0zqx241pw8nj97uj --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-975382-m02"
	I0115 09:51:06.575830   26437 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 09:51:06.716298   26437 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0115 09:51:06.716336   26437 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0115 09:51:06.761655   26437 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:51:06.761680   26437 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:51:06.761686   26437 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 09:51:06.873507   26437 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0115 09:51:08.907286   26437 command_runner.go:130] > This node has joined the cluster:
	I0115 09:51:08.907316   26437 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0115 09:51:08.907326   26437 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0115 09:51:08.907336   26437 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0115 09:51:08.909171   26437 command_runner.go:130] ! W0115 09:51:06.552935     818 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0115 09:51:08.909195   26437 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:51:08.909389   26437 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u86fz7.0zqx241pw8nj97uj --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-975382-m02": (2.37897033s)
	I0115 09:51:08.909412   26437 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 09:51:09.137306   26437 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0115 09:51:09.137408   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-975382 minikube.k8s.io/updated_at=2024_01_15T09_51_09_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:51:09.239752   26437 command_runner.go:130] > node/multinode-975382-m02 labeled
	I0115 09:51:09.239788   26437 start.go:306] JoinCluster complete in 2.901693913s
	I0115 09:51:09.239801   26437 cni.go:84] Creating CNI manager for ""
	I0115 09:51:09.239808   26437 cni.go:136] 2 nodes found, recommending kindnet
	I0115 09:51:09.239861   26437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:51:09.245648   26437 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 09:51:09.245673   26437 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0115 09:51:09.245684   26437 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0115 09:51:09.245698   26437 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:51:09.245711   26437 command_runner.go:130] > Access: 2024-01-15 09:49:44.546824380 +0000
	I0115 09:51:09.245724   26437 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0115 09:51:09.245736   26437 command_runner.go:130] > Change: 2024-01-15 09:49:42.733824380 +0000
	I0115 09:51:09.245745   26437 command_runner.go:130] >  Birth: -
	I0115 09:51:09.245817   26437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 09:51:09.245834   26437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:51:09.262932   26437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 09:51:09.629233   26437 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 09:51:09.629267   26437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 09:51:09.629277   26437 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 09:51:09.629285   26437 command_runner.go:130] > daemonset.apps/kindnet configured
	I0115 09:51:09.629809   26437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:51:09.630034   26437 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:51:09.630298   26437 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:51:09.630305   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:09.630312   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:09.630318   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:09.632334   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:09.632357   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:09.632367   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:09.632375   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:09.632383   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:09.632391   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:09.632400   26437 round_trippers.go:580]     Content-Length: 291
	I0115 09:51:09.632415   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:09 GMT
	I0115 09:51:09.632426   26437 round_trippers.go:580]     Audit-Id: e05d2468-a48c-4a28-8389-daa5f5b7edbc
	I0115 09:51:09.632454   26437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"439","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 09:51:09.632556   26437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-975382" context rescaled to 1 replicas
	I0115 09:51:09.632588   26437 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 09:51:09.634682   26437 out.go:177] * Verifying Kubernetes components...
	I0115 09:51:09.636092   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:51:09.667961   26437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:51:09.668271   26437 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:51:09.668511   26437 node_ready.go:35] waiting up to 6m0s for node "multinode-975382-m02" to be "Ready" ...
	I0115 09:51:09.668582   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:09.668590   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:09.668598   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:09.668604   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:09.671710   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:09.671734   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:09.671743   26437 round_trippers.go:580]     Audit-Id: ef274a14-5dc9-4136-b6b5-ceb18ea4c29c
	I0115 09:51:09.671751   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:09.671760   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:09.671769   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:09.671778   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:09.671786   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:09.671794   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:09 GMT
	I0115 09:51:09.671878   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:10.169473   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:10.169495   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:10.169503   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:10.169510   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:10.172629   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:10.172653   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:10.172664   26437 round_trippers.go:580]     Audit-Id: e8df730c-6705-4c78-bf9e-1811a16dd33d
	I0115 09:51:10.172673   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:10.172681   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:10.172690   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:10.172702   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:10.172711   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:10.172723   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:10 GMT
	I0115 09:51:10.172818   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:10.669366   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:10.669392   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:10.669400   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:10.669406   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:10.672218   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:10.672237   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:10.672244   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:10.672249   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:10.672254   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:10.672260   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:10 GMT
	I0115 09:51:10.672265   26437 round_trippers.go:580]     Audit-Id: bfbe1e71-4822-46aa-8999-94b143d5e200
	I0115 09:51:10.672271   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:10.672275   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:10.672342   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:11.168882   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:11.168911   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:11.168921   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:11.168928   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:11.172665   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:11.172685   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:11.172695   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:11.172703   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:11.172754   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:11 GMT
	I0115 09:51:11.172765   26437 round_trippers.go:580]     Audit-Id: dfd03a8d-afa5-4ba1-bb99-0f71982d5bbd
	I0115 09:51:11.172773   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:11.172798   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:11.172811   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:11.172901   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:11.669593   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:11.669626   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:11.669638   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:11.669648   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:11.672689   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:11.672754   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:11.672768   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:11.672776   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:11.672786   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:11.672798   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:11.672811   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:11 GMT
	I0115 09:51:11.672823   26437 round_trippers.go:580]     Audit-Id: 87ddffd7-be4c-4e8a-9605-688b3536f895
	I0115 09:51:11.672836   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:11.672880   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:11.673121   26437 node_ready.go:58] node "multinode-975382-m02" has status "Ready":"False"
	I0115 09:51:12.169407   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:12.169434   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:12.169442   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:12.169449   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:12.172431   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:12.172456   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:12.172466   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:12 GMT
	I0115 09:51:12.172475   26437 round_trippers.go:580]     Audit-Id: a3e67962-4ab1-424f-aaad-5d0cb7b8691e
	I0115 09:51:12.172483   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:12.172490   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:12.172498   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:12.172506   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:12.172514   26437 round_trippers.go:580]     Content-Length: 4082
	I0115 09:51:12.172683   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"494","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0115 09:51:12.669350   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:12.669371   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:12.669379   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:12.669385   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:12.672063   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:12.672087   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:12.672097   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:12.672105   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:12 GMT
	I0115 09:51:12.672114   26437 round_trippers.go:580]     Audit-Id: 81202f35-d4cc-41d2-8220-57610d921e8e
	I0115 09:51:12.672123   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:12.672131   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:12.672139   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:12.672313   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:13.169523   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:13.169553   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:13.169565   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:13.169574   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:13.173505   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:13.173526   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:13.173533   26437 round_trippers.go:580]     Audit-Id: b775aed1-9390-4cce-ac06-dd79e90d29a4
	I0115 09:51:13.173538   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:13.173544   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:13.173549   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:13.173556   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:13.173564   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:13 GMT
	I0115 09:51:13.173769   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:13.669470   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:13.669496   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:13.669508   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:13.669517   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:13.672115   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:13.672141   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:13.672152   26437 round_trippers.go:580]     Audit-Id: 93785d87-f54d-4712-99c9-9751b3e036d3
	I0115 09:51:13.672159   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:13.672172   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:13.672179   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:13.672186   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:13.672193   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:13 GMT
	I0115 09:51:13.672629   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:14.169427   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:14.169450   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:14.169458   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:14.169464   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:14.172000   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:14.172023   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:14.172032   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:14.172037   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:14.172042   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:14.172048   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:14.172053   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:14 GMT
	I0115 09:51:14.172058   26437 round_trippers.go:580]     Audit-Id: fd5ef416-dcb2-4224-a41f-c795953e4e33
	I0115 09:51:14.172411   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:14.172710   26437 node_ready.go:58] node "multinode-975382-m02" has status "Ready":"False"
	I0115 09:51:14.669056   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:14.669077   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:14.669085   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:14.669090   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:14.672622   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:14.672652   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:14.672663   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:14.672672   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:14.672682   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:14.672691   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:14.672700   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:14 GMT
	I0115 09:51:14.672709   26437 round_trippers.go:580]     Audit-Id: 2dbd7347-23b2-4999-8cad-53dfd0c57378
	I0115 09:51:14.673026   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:15.168665   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:15.168691   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:15.168699   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:15.168705   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:15.171692   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:15.171714   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:15.171723   26437 round_trippers.go:580]     Audit-Id: acfd7511-19c9-4a73-8730-c7a03ed494d1
	I0115 09:51:15.171731   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:15.171738   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:15.171745   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:15.171753   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:15.171760   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:15 GMT
	I0115 09:51:15.171931   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:15.669650   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:15.669673   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:15.669681   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:15.669687   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:15.672363   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:15.672390   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:15.672400   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:15 GMT
	I0115 09:51:15.672408   26437 round_trippers.go:580]     Audit-Id: 96735ebc-67b2-4b66-9727-af167cf27b3e
	I0115 09:51:15.672416   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:15.672424   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:15.672433   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:15.672441   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:15.672614   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:16.169288   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:16.169310   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:16.169318   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:16.169324   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:16.172030   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:16.172052   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:16.172060   26437 round_trippers.go:580]     Audit-Id: 06f867d4-885f-41cb-964b-26f7079a6542
	I0115 09:51:16.172066   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:16.172071   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:16.172080   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:16.172088   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:16.172096   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:16 GMT
	I0115 09:51:16.172403   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:16.669313   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:16.669335   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:16.669350   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:16.669355   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:16.672781   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:16.672800   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:16.672807   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:16.672813   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:16.672818   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:16.672823   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:16 GMT
	I0115 09:51:16.672828   26437 round_trippers.go:580]     Audit-Id: 907e129b-07a8-4c4c-a58f-b582032f1af9
	I0115 09:51:16.672833   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:16.673203   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:16.673513   26437 node_ready.go:58] node "multinode-975382-m02" has status "Ready":"False"
	I0115 09:51:17.168864   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:17.168886   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.168894   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.168900   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.171557   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:17.171580   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.171591   26437 round_trippers.go:580]     Audit-Id: 2584b837-2474-4f28-9404-2e9affc574de
	I0115 09:51:17.171599   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.171605   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.171613   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.171621   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.171627   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.171886   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"500","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0115 09:51:17.669604   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:17.669633   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.669644   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.669653   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.672707   26437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:51:17.672725   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.672731   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.672737   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.672742   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.672752   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.672757   26437 round_trippers.go:580]     Audit-Id: f3564236-8d90-4755-86fe-54f613f707e4
	I0115 09:51:17.672763   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.673482   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"520","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0115 09:51:17.673717   26437 node_ready.go:49] node "multinode-975382-m02" has status "Ready":"True"
	I0115 09:51:17.673731   26437 node_ready.go:38] duration metric: took 8.005206004s waiting for node "multinode-975382-m02" to be "Ready" ...
	I0115 09:51:17.673740   26437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:51:17.673803   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 09:51:17.673812   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.673819   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.673824   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.678051   26437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 09:51:17.678072   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.678082   26437 round_trippers.go:580]     Audit-Id: d202f05b-5cd2-49d7-b8f0-fc72f1507ba6
	I0115 09:51:17.678088   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.678093   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.678098   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.678104   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.678115   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.679354   26437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"435","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67364 chars]
	I0115 09:51:17.681547   26437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.681615   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 09:51:17.681624   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.681630   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.681637   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.683447   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:51:17.683462   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.683468   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.683474   26437 round_trippers.go:580]     Audit-Id: d9068476-c458-422b-94c6-f89d0a293b5d
	I0115 09:51:17.683482   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.683487   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.683492   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.683498   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.683666   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"435","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0115 09:51:17.684084   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:17.684098   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.684105   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.684111   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.689750   26437 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 09:51:17.689771   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.689781   26437 round_trippers.go:580]     Audit-Id: 0def96f6-fa2b-42c8-b68f-ed5910576deb
	I0115 09:51:17.689789   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.689797   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.689805   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.689814   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.689822   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.689926   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:17.690269   26437 pod_ready.go:92] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:17.690284   26437 pod_ready.go:81] duration metric: took 8.715776ms waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.690294   26437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.690357   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 09:51:17.690364   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.690373   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.690380   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.693364   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:17.693377   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.693384   26437 round_trippers.go:580]     Audit-Id: db9c1f37-9228-41fd-b684-affde1060410
	I0115 09:51:17.693389   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.693395   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.693399   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.693404   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.693409   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.694077   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"441","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0115 09:51:17.694373   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:17.694384   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.694391   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.694396   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.696654   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:17.696673   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.696683   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.696691   26437 round_trippers.go:580]     Audit-Id: dfe73b90-60b5-4f09-9241-d103f7a5fd14
	I0115 09:51:17.696699   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.696707   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.696714   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.696723   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.697266   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:17.697547   26437 pod_ready.go:92] pod "etcd-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:17.697560   26437 pod_ready.go:81] duration metric: took 7.259057ms waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.697571   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.697611   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 09:51:17.697617   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.697624   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.697632   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.699536   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:51:17.699554   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.699563   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.699572   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.699580   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.699588   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.699600   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.699610   26437 round_trippers.go:580]     Audit-Id: 0f27a1fb-45e7-421a-b9ac-cead77150e73
	I0115 09:51:17.699746   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"334","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0115 09:51:17.700169   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:17.700184   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.700193   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.700199   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.701852   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:51:17.701870   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.701879   26437 round_trippers.go:580]     Audit-Id: 5975485c-f9ee-4d52-9990-d62a3dc8da1a
	I0115 09:51:17.701889   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.701897   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.701912   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.701923   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.701935   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.702089   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:17.702372   26437 pod_ready.go:92] pod "kube-apiserver-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:17.702384   26437 pod_ready.go:81] duration metric: took 4.805782ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.702392   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.702449   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 09:51:17.702458   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.702465   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.702472   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.704948   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:17.704966   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.705018   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.705034   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.705042   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.705051   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.705061   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.705070   26437 round_trippers.go:580]     Audit-Id: 005acf8b-632f-49a6-b54b-5a923a4b6d08
	I0115 09:51:17.705902   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"335","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0115 09:51:17.706335   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:17.706352   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.706361   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.706367   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.707967   26437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:51:17.707980   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.707986   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.707995   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.708002   26437 round_trippers.go:580]     Audit-Id: 3ccb771a-94bc-4635-b23e-cc6024b4c3fc
	I0115 09:51:17.708009   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.708016   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.708023   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.708259   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:17.708508   26437 pod_ready.go:92] pod "kube-controller-manager-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:17.708522   26437 pod_ready.go:81] duration metric: took 6.124073ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.708530   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:17.869885   26437 request.go:629] Waited for 161.301211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 09:51:17.869946   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 09:51:17.869952   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:17.869962   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:17.869971   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:17.872542   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:17.872558   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:17.872565   26437 round_trippers.go:580]     Audit-Id: 71886106-5708-4f90-8873-0e7a04a44efe
	I0115 09:51:17.872570   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:17.872575   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:17.872581   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:17.872589   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:17.872597   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:17 GMT
	I0115 09:51:17.872913   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"408","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 09:51:18.070654   26437 request.go:629] Waited for 197.351327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:18.070705   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:18.070709   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:18.070718   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:18.070724   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:18.073293   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:18.073309   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:18.073315   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:18.073321   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:18.073326   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:18.073331   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:18.073336   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:18 GMT
	I0115 09:51:18.073340   26437 round_trippers.go:580]     Audit-Id: fdb420d9-ae44-4cb1-b41f-f247bad6935c
	I0115 09:51:18.073799   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:18.074101   26437 pod_ready.go:92] pod "kube-proxy-jgsx4" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:18.074117   26437 pod_ready.go:81] duration metric: took 365.582299ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:18.074126   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:18.270306   26437 request.go:629] Waited for 196.108193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 09:51:18.270357   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 09:51:18.270362   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:18.270369   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:18.270375   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:18.273223   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:18.273240   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:18.273247   26437 round_trippers.go:580]     Audit-Id: eba27021-47ff-4a72-a765-76bf9f9aa323
	I0115 09:51:18.273252   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:18.273257   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:18.273262   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:18.273267   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:18.273272   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:18 GMT
	I0115 09:51:18.273545   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-znv78","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb4d831f-7308-4f44-b944-fdfdf1d583c2","resourceVersion":"507","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0115 09:51:18.470332   26437 request.go:629] Waited for 196.376528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:18.470386   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 09:51:18.470390   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:18.470398   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:18.470403   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:18.473323   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:18.473347   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:18.473357   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:18 GMT
	I0115 09:51:18.473365   26437 round_trippers.go:580]     Audit-Id: b35e96a7-bf09-4002-99f3-56d7a98c296f
	I0115 09:51:18.473373   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:18.473381   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:18.473388   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:18.473395   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:18.473546   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"520","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_51_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0115 09:51:18.473833   26437 pod_ready.go:92] pod "kube-proxy-znv78" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:18.473849   26437 pod_ready.go:81] duration metric: took 399.714948ms waiting for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:18.473860   26437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:18.670012   26437 request.go:629] Waited for 196.089102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 09:51:18.670079   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 09:51:18.670084   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:18.670091   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:18.670097   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:18.672841   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:18.672861   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:18.672868   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:18.672873   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:18.672879   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:18 GMT
	I0115 09:51:18.672884   26437 round_trippers.go:580]     Audit-Id: 1cbaf0a8-11ad-484a-902d-49e2cf36dc05
	I0115 09:51:18.672889   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:18.672894   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:18.673496   26437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"440","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0115 09:51:18.870035   26437 request.go:629] Waited for 196.153646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:18.870096   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 09:51:18.870102   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:18.870109   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:18.870115   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:18.872389   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:18.872413   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:18.872422   26437 round_trippers.go:580]     Audit-Id: ef04d75f-1f05-4fce-a5a6-bc65f38cdb52
	I0115 09:51:18.872428   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:18.872432   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:18.872443   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:18.872451   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:18.872459   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:18 GMT
	I0115 09:51:18.872669   26437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0115 09:51:18.872956   26437 pod_ready.go:92] pod "kube-scheduler-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 09:51:18.872970   26437 pod_ready.go:81] duration metric: took 399.103364ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 09:51:18.872979   26437 pod_ready.go:38] duration metric: took 1.199231993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:51:18.872991   26437 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:51:18.873032   26437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:51:18.888812   26437 system_svc.go:56] duration metric: took 15.813374ms WaitForService to wait for kubelet.
	I0115 09:51:18.888839   26437 kubeadm.go:581] duration metric: took 9.256228372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:51:18.888861   26437 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:51:19.070274   26437 request.go:629] Waited for 181.345071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0115 09:51:19.070340   26437 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 09:51:19.070349   26437 round_trippers.go:469] Request Headers:
	I0115 09:51:19.070359   26437 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:51:19.070371   26437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:51:19.073032   26437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:51:19.073055   26437 round_trippers.go:577] Response Headers:
	I0115 09:51:19.073065   26437 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:51:19.073072   26437 round_trippers.go:580]     Content-Type: application/json
	I0115 09:51:19.073080   26437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 09:51:19.073088   26437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 09:51:19.073100   26437 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:51:19 GMT
	I0115 09:51:19.073111   26437 round_trippers.go:580]     Audit-Id: 20b63e8e-5782-433f-8daf-f898199333a3
	I0115 09:51:19.073358   26437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"418","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10197 chars]
	I0115 09:51:19.073768   26437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 09:51:19.073786   26437 node_conditions.go:123] node cpu capacity is 2
	I0115 09:51:19.073795   26437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 09:51:19.073801   26437 node_conditions.go:123] node cpu capacity is 2
	I0115 09:51:19.073807   26437 node_conditions.go:105] duration metric: took 184.940345ms to run NodePressure ...
	I0115 09:51:19.073817   26437 start.go:228] waiting for startup goroutines ...
	I0115 09:51:19.073841   26437 start.go:242] writing updated cluster config ...
	I0115 09:51:19.074095   26437 ssh_runner.go:195] Run: rm -f paused
	I0115 09:51:19.124077   26437 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 09:51:19.129578   26437 out.go:177] * Done! kubectl is now configured to use "multinode-975382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 09:49:43 UTC, ends at Mon 2024-01-15 09:51:25 UTC. --
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.782121713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a87c69ab-844b-4bd3-903d-0b43100f58f0 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.783672012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2f34b93f-3557-40c0-88dc-909649405768 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.784142349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705312285784127451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2f34b93f-3557-40c0-88dc-909649405768 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.784926897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d644d3fb-b73e-4032-8f5d-a206590b4175 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.784975854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d644d3fb-b73e-4032-8f5d-a206590b4175 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.785225066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad304c3d0a0450ccf5c4bba0d3895c405d01f84ee554b043cd6b723f6c986261,PodSandboxId:a5c78fe8b3d550cee8e314323b38063756d67c2498fea449dab1648a94e0a3ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312281588066238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41116677682cc0f877f2c1384f8e437c7cbbb139a6b3a8c1c30459d9086d5e73,PodSandboxId:02c19ca995179d23ba8faa909205422becb547817f47b99393cf45e3f27645a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312234685038409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f611152180890f7a588978af09a08be5a312ab5a30dae03b5e821b30f2dccd,PodSandboxId:38c456df22909890e638ed31dcd93aa6fc8615c6afbda100d323cbfb23bd9917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312234437427107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bd18b8a0385e90c4228bd2e2ec74bd43acc2a294dc1fa7dbdb54a4cea6b342,PodSandboxId:73d219715349d82588dbe72e65ddb9f149774414ecc9a7eb325b37c0fdf83b94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312232004628749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26cd286c1b02642d61f359c171003d7b21ecac4415b408da28e6e8a39943ded,PodSandboxId:9384ce772c3bc0fb542534bde78537a5286e73e47b518a4b955d5c2d48d0dc6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312229991036379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879
e028ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a42395c73f6867e6d07c8193031aa0ddf4e32bbb32382441062163e9154370,PodSandboxId:87b23c8f6c3ec452a598354210c9add2132da280d65c0def3937f98416effb34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312209306230789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744cdc172b84cf54e2e22cb5c11ba5665a6cf8a97d27e510cfe3238f0e7f1d10,PodSandboxId:32a193e70b922e20a6db93b196d10170613487b349ce21955f460df2c30f1be2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312208757174128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd277
48d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648,PodSandboxId:90f5000ca3d36451233b9adda80c9e4a94f0295c3d44b1008de3d066aff89be2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312208718805921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d41
1fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7344b97ee3621557536327c7bd7983e6225b6bfd634ced09f8aed495a548314,PodSandboxId:4759fe1f15bdd04274b76d005a900ba046d00824f03c47f448d7830f9e8afa40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312208498984197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes
.container.hash: 7e22dc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d644d3fb-b73e-4032-8f5d-a206590b4175 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.789734292Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b676b0ba-0ab6-40f0-8d2f-b1151163427e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.790076991Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a5c78fe8b3d550cee8e314323b38063756d67c2498fea449dab1648a94e0a3ac,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-h2lk5,Uid:38f4390b-b4e4-467a-87f2-d4d4fc36cd18,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312280252591022,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:51:19.908184614Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38c456df22909890e638ed31dcd93aa6fc8615c6afbda100d323cbfb23bd9917,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b8eb636d-31de-4a7e-a296-a66493d5a827,Namespace:kube-system,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1705312234018893890,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/
tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-15T09:50:33.678765252Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:02c19ca995179d23ba8faa909205422becb547817f47b99393cf45e3f27645a8,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-n2sqg,Uid:f303a63a-c959-477e-89d5-c35bd0802b1b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312234014514103,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:50:33.670765987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73d219715349d82588dbe72e65ddb9f149774414ecc9a7eb325b37c0fdf83b94,Metadata:&PodSandboxMetadata{Name:kindnet-7tf97,Uid:3b9e470b-af37-44cd-8402-6ec9b3340058,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1705312229022986467,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:50:28.089138620Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9384ce772c3bc0fb542534bde78537a5286e73e47b518a4b955d5c2d48d0dc6c,Metadata:&PodSandboxMetadata{Name:kube-proxy-jgsx4,Uid:a779cea9-5532-4d69-9e49-ac2879e028ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312229001362417,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879e028ec,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T09:50:28.069495536Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90f5000ca3d36451233b9adda80c9e4a94f0295c3d44b1008de3d066aff89be2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-975382,Uid:638704967c86b61fc474d50d411fc862,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312208098079996,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d411fc862,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: 638704967c86b61fc474d50d411fc862,kubernetes.io/config.seen: 2024-01-15T09:50:07.549383162Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87b23c8f6c3ec452a598354210c9add21
32da280d65c0def3937f98416effb34,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-975382,Uid:c61deabbad0762e4c988c95c1d9d34bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312208093173074,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c61deabbad0762e4c988c95c1d9d34bc,kubernetes.io/config.seen: 2024-01-15T09:50:07.549385524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32a193e70b922e20a6db93b196d10170613487b349ce21955f460df2c30f1be2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-975382,Uid:1a6b49eaacd27748d82a7a1330e13424,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312208065736882,Labels:map[string]string{component: kube-controller-manager,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd27748d82a7a1330e13424,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a6b49eaacd27748d82a7a1330e13424,kubernetes.io/config.seen: 2024-01-15T09:50:07.549384578Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4759fe1f15bdd04274b76d005a900ba046d00824f03c47f448d7830f9e8afa40,Metadata:&PodSandboxMetadata{Name:etcd-multinode-975382,Uid:2cb63d0e596a024d1a6f045abe90bff6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705312208020771681,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubern
etes.io/config.hash: 2cb63d0e596a024d1a6f045abe90bff6,kubernetes.io/config.seen: 2024-01-15T09:50:07.549379101Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b676b0ba-0ab6-40f0-8d2f-b1151163427e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.791184784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f6360749-ac1b-49a7-a643-58cd2efa7ed6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.791257219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f6360749-ac1b-49a7-a643-58cd2efa7ed6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.791469007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad304c3d0a0450ccf5c4bba0d3895c405d01f84ee554b043cd6b723f6c986261,PodSandboxId:a5c78fe8b3d550cee8e314323b38063756d67c2498fea449dab1648a94e0a3ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312281588066238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41116677682cc0f877f2c1384f8e437c7cbbb139a6b3a8c1c30459d9086d5e73,PodSandboxId:02c19ca995179d23ba8faa909205422becb547817f47b99393cf45e3f27645a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312234685038409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f611152180890f7a588978af09a08be5a312ab5a30dae03b5e821b30f2dccd,PodSandboxId:38c456df22909890e638ed31dcd93aa6fc8615c6afbda100d323cbfb23bd9917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312234437427107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bd18b8a0385e90c4228bd2e2ec74bd43acc2a294dc1fa7dbdb54a4cea6b342,PodSandboxId:73d219715349d82588dbe72e65ddb9f149774414ecc9a7eb325b37c0fdf83b94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312232004628749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26cd286c1b02642d61f359c171003d7b21ecac4415b408da28e6e8a39943ded,PodSandboxId:9384ce772c3bc0fb542534bde78537a5286e73e47b518a4b955d5c2d48d0dc6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312229991036379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879
e028ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a42395c73f6867e6d07c8193031aa0ddf4e32bbb32382441062163e9154370,PodSandboxId:87b23c8f6c3ec452a598354210c9add2132da280d65c0def3937f98416effb34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312209306230789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744cdc172b84cf54e2e22cb5c11ba5665a6cf8a97d27e510cfe3238f0e7f1d10,PodSandboxId:32a193e70b922e20a6db93b196d10170613487b349ce21955f460df2c30f1be2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312208757174128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd277
48d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648,PodSandboxId:90f5000ca3d36451233b9adda80c9e4a94f0295c3d44b1008de3d066aff89be2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312208718805921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d41
1fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7344b97ee3621557536327c7bd7983e6225b6bfd634ced09f8aed495a548314,PodSandboxId:4759fe1f15bdd04274b76d005a900ba046d00824f03c47f448d7830f9e8afa40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312208498984197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes
.container.hash: 7e22dc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f6360749-ac1b-49a7-a643-58cd2efa7ed6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.825461152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e6d83b4f-2023-4303-bf57-dbb9bfcd58ce name=/runtime.v1.RuntimeService/Version
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.825527290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e6d83b4f-2023-4303-bf57-dbb9bfcd58ce name=/runtime.v1.RuntimeService/Version
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.826587037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0d350bb5-4815-4c7e-bfce-8fa25e11657d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.827086959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705312285827072549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0d350bb5-4815-4c7e-bfce-8fa25e11657d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.827898214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d4a0ff60-1650-4b8d-bc3b-6c65188daebc name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.828016190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d4a0ff60-1650-4b8d-bc3b-6c65188daebc name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.828243042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad304c3d0a0450ccf5c4bba0d3895c405d01f84ee554b043cd6b723f6c986261,PodSandboxId:a5c78fe8b3d550cee8e314323b38063756d67c2498fea449dab1648a94e0a3ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312281588066238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41116677682cc0f877f2c1384f8e437c7cbbb139a6b3a8c1c30459d9086d5e73,PodSandboxId:02c19ca995179d23ba8faa909205422becb547817f47b99393cf45e3f27645a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312234685038409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f611152180890f7a588978af09a08be5a312ab5a30dae03b5e821b30f2dccd,PodSandboxId:38c456df22909890e638ed31dcd93aa6fc8615c6afbda100d323cbfb23bd9917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312234437427107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bd18b8a0385e90c4228bd2e2ec74bd43acc2a294dc1fa7dbdb54a4cea6b342,PodSandboxId:73d219715349d82588dbe72e65ddb9f149774414ecc9a7eb325b37c0fdf83b94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312232004628749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26cd286c1b02642d61f359c171003d7b21ecac4415b408da28e6e8a39943ded,PodSandboxId:9384ce772c3bc0fb542534bde78537a5286e73e47b518a4b955d5c2d48d0dc6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312229991036379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879
e028ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a42395c73f6867e6d07c8193031aa0ddf4e32bbb32382441062163e9154370,PodSandboxId:87b23c8f6c3ec452a598354210c9add2132da280d65c0def3937f98416effb34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312209306230789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744cdc172b84cf54e2e22cb5c11ba5665a6cf8a97d27e510cfe3238f0e7f1d10,PodSandboxId:32a193e70b922e20a6db93b196d10170613487b349ce21955f460df2c30f1be2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312208757174128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd277
48d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648,PodSandboxId:90f5000ca3d36451233b9adda80c9e4a94f0295c3d44b1008de3d066aff89be2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312208718805921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d41
1fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7344b97ee3621557536327c7bd7983e6225b6bfd634ced09f8aed495a548314,PodSandboxId:4759fe1f15bdd04274b76d005a900ba046d00824f03c47f448d7830f9e8afa40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312208498984197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes
.container.hash: 7e22dc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d4a0ff60-1650-4b8d-bc3b-6c65188daebc name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.872564687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=848bca2e-7d4e-475b-bccd-95a0d5d1e366 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.872660425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=848bca2e-7d4e-475b-bccd-95a0d5d1e366 name=/runtime.v1.RuntimeService/Version
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.874033643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=14c77f26-b7cc-44e8-b79e-074ffcc00367 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.874426749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705312285874413510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=14c77f26-b7cc-44e8-b79e-074ffcc00367 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.874936633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17177193-e7fb-4f26-9476-894dcb1a3dd0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.875003013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17177193-e7fb-4f26-9476-894dcb1a3dd0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 09:51:25 multinode-975382 crio[716]: time="2024-01-15 09:51:25.875216920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad304c3d0a0450ccf5c4bba0d3895c405d01f84ee554b043cd6b723f6c986261,PodSandboxId:a5c78fe8b3d550cee8e314323b38063756d67c2498fea449dab1648a94e0a3ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312281588066238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41116677682cc0f877f2c1384f8e437c7cbbb139a6b3a8c1c30459d9086d5e73,PodSandboxId:02c19ca995179d23ba8faa909205422becb547817f47b99393cf45e3f27645a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312234685038409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f611152180890f7a588978af09a08be5a312ab5a30dae03b5e821b30f2dccd,PodSandboxId:38c456df22909890e638ed31dcd93aa6fc8615c6afbda100d323cbfb23bd9917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312234437427107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bd18b8a0385e90c4228bd2e2ec74bd43acc2a294dc1fa7dbdb54a4cea6b342,PodSandboxId:73d219715349d82588dbe72e65ddb9f149774414ecc9a7eb325b37c0fdf83b94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312232004628749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26cd286c1b02642d61f359c171003d7b21ecac4415b408da28e6e8a39943ded,PodSandboxId:9384ce772c3bc0fb542534bde78537a5286e73e47b518a4b955d5c2d48d0dc6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312229991036379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879
e028ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a42395c73f6867e6d07c8193031aa0ddf4e32bbb32382441062163e9154370,PodSandboxId:87b23c8f6c3ec452a598354210c9add2132da280d65c0def3937f98416effb34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312209306230789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744cdc172b84cf54e2e22cb5c11ba5665a6cf8a97d27e510cfe3238f0e7f1d10,PodSandboxId:32a193e70b922e20a6db93b196d10170613487b349ce21955f460df2c30f1be2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312208757174128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd277
48d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648,PodSandboxId:90f5000ca3d36451233b9adda80c9e4a94f0295c3d44b1008de3d066aff89be2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312208718805921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d41
1fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7344b97ee3621557536327c7bd7983e6225b6bfd634ced09f8aed495a548314,PodSandboxId:4759fe1f15bdd04274b76d005a900ba046d00824f03c47f448d7830f9e8afa40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312208498984197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes
.container.hash: 7e22dc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17177193-e7fb-4f26-9476-894dcb1a3dd0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ad304c3d0a045       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   a5c78fe8b3d55       busybox-5bc68d56bd-h2lk5
	41116677682cc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      51 seconds ago       Running             coredns                   0                   02c19ca995179       coredns-5dd5756b68-n2sqg
	e2f6111521808       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       0                   38c456df22909       storage-provisioner
	65bd18b8a0385       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      53 seconds ago       Running             kindnet-cni               0                   73d219715349d       kindnet-7tf97
	f26cd286c1b02       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      55 seconds ago       Running             kube-proxy                0                   9384ce772c3bc       kube-proxy-jgsx4
	f2a42395c73f6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   87b23c8f6c3ec       kube-scheduler-multinode-975382
	744cdc172b84c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   32a193e70b922       kube-controller-manager-multinode-975382
	8e218b531ed43       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   90f5000ca3d36       kube-apiserver-multinode-975382
	a7344b97ee362       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   4759fe1f15bdd       etcd-multinode-975382
	
	
	==> coredns [41116677682cc0f877f2c1384f8e437c7cbbb139a6b3a8c1c30459d9086d5e73] <==
	[INFO] 10.244.0.3:59368 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100165s
	[INFO] 10.244.1.2:44333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177025s
	[INFO] 10.244.1.2:40381 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001704103s
	[INFO] 10.244.1.2:47445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132126s
	[INFO] 10.244.1.2:52815 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136685s
	[INFO] 10.244.1.2:40419 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001439892s
	[INFO] 10.244.1.2:44748 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098631s
	[INFO] 10.244.1.2:60119 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144975s
	[INFO] 10.244.1.2:52732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121822s
	[INFO] 10.244.0.3:37451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092643s
	[INFO] 10.244.0.3:52159 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091309s
	[INFO] 10.244.0.3:52289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052922s
	[INFO] 10.244.0.3:36139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103084s
	[INFO] 10.244.1.2:53728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176356s
	[INFO] 10.244.1.2:55175 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088795s
	[INFO] 10.244.1.2:48657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126402s
	[INFO] 10.244.1.2:36033 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121172s
	[INFO] 10.244.0.3:42420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111369s
	[INFO] 10.244.0.3:43178 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198937s
	[INFO] 10.244.0.3:49242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110458s
	[INFO] 10.244.0.3:36854 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019711s
	[INFO] 10.244.1.2:40497 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119837s
	[INFO] 10.244.1.2:43094 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084104s
	[INFO] 10.244.1.2:37345 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073576s
	[INFO] 10.244.1.2:51423 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074661s
	
	
	==> describe nodes <==
	Name:               multinode-975382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-975382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-975382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_50_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:50:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-975382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:51:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:50:33 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:50:33 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:50:33 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:50:33 +0000   Mon, 15 Jan 2024 09:50:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-975382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa52c9a1c9b14ad8aa1f708bd3b23c5b
	  System UUID:                aa52c9a1-c9b1-4ad8-aa1f-708bd3b23c5b
	  Boot ID:                    6862baa7-833d-4547-9211-85bbc4a40310
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h2lk5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-n2sqg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-multinode-975382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kindnet-7tf97                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      58s
	  kube-system                 kube-apiserver-multinode-975382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-975382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-jgsx4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-multinode-975382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node multinode-975382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node multinode-975382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-975382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node multinode-975382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node multinode-975382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node multinode-975382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                node-controller  Node multinode-975382 event: Registered Node multinode-975382 in Controller
	  Normal  NodeReady                53s                kubelet          Node multinode-975382 status is now: NodeReady
	
	
	Name:               multinode-975382-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-975382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-975382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T09_51_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:51:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-975382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:51:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:51:17 +0000   Mon, 15 Jan 2024 09:51:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:51:17 +0000   Mon, 15 Jan 2024 09:51:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:51:17 +0000   Mon, 15 Jan 2024 09:51:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:51:17 +0000   Mon, 15 Jan 2024 09:51:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    multinode-975382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ad3b9e7541a43f5bb4662152fcf04c7
	  System UUID:                4ad3b9e7-541a-43f5-bb46-62152fcf04c7
	  Boot ID:                    218cbe2d-977f-4264-b8ee-4b4a0d915cea
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pwx96    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-pd2q7               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-znv78            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x5 over 19s)  kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x5 over 19s)  kubelet          Node multinode-975382-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x5 over 19s)  kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-975382-m02 event: Registered Node multinode-975382-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-975382-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan15 09:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.329576] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.332919] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139104] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000007] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.044930] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094032] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.114913] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.150016] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.110758] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.214979] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[Jan15 09:50] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +9.315135] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[ +19.644501] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [a7344b97ee3621557536327c7bd7983e6225b6bfd634ced09f8aed495a548314] <==
	{"level":"info","ts":"2024-01-15T09:50:10.485344Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T09:50:10.484739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141)"}
	{"level":"info","ts":"2024-01-15T09:50:10.490049Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"a09c9983ac28f1fd","added-peer-peer-urls":["https://192.168.39.217:2380"]}
	{"level":"info","ts":"2024-01-15T09:50:10.484897Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-01-15T09:50:10.49303Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-01-15T09:50:10.493927Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T09:50:10.494074Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T09:50:10.897941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T09:50:10.898134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T09:50:10.898267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 1"}
	{"level":"info","ts":"2024-01-15T09:50:10.89838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T09:50:10.898412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2024-01-15T09:50:10.898517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 2"}
	{"level":"info","ts":"2024-01-15T09:50:10.898547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2024-01-15T09:50:10.904073Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:50:10.908199Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:multinode-975382 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T09:50:10.908261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T09:50:10.909038Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:50:10.909148Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:50:10.909169Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:50:10.909704Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
	{"level":"info","ts":"2024-01-15T09:50:10.910999Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T09:50:10.915484Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T09:50:10.918927Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T09:50:10.919048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:51:26 up 1 min,  0 users,  load average: 0.43, 0.17, 0.06
	Linux multinode-975382 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [65bd18b8a0385e90c4228bd2e2ec74bd43acc2a294dc1fa7dbdb54a4cea6b342] <==
	I0115 09:50:32.755920       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0115 09:50:32.756076       1 main.go:107] hostIP = 192.168.39.217
	podIP = 192.168.39.217
	I0115 09:50:32.756369       1 main.go:116] setting mtu 1500 for CNI 
	I0115 09:50:32.756408       1 main.go:146] kindnetd IP family: "ipv4"
	I0115 09:50:32.756442       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0115 09:50:33.349188       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:50:33.349289       1 main.go:227] handling current node
	I0115 09:50:43.369745       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:50:43.369939       1 main.go:227] handling current node
	I0115 09:50:53.374181       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:50:53.374283       1 main.go:227] handling current node
	I0115 09:51:03.378259       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:51:03.378392       1 main.go:227] handling current node
	I0115 09:51:13.386007       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:51:13.386156       1 main.go:227] handling current node
	I0115 09:51:13.386188       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 09:51:13.386208       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	I0115 09:51:13.386422       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.95 Flags: [] Table: 0} 
	I0115 09:51:23.400774       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 09:51:23.400917       1 main.go:227] handling current node
	I0115 09:51:23.400950       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 09:51:23.400970       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648] <==
	I0115 09:50:12.617160       1 controller.go:624] quota admission added evaluator for: namespaces
	I0115 09:50:12.619792       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0115 09:50:12.620115       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 09:50:12.622475       1 shared_informer.go:318] Caches are synced for configmaps
	I0115 09:50:12.639268       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 09:50:13.523410       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0115 09:50:13.529943       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0115 09:50:13.529980       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0115 09:50:14.207068       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 09:50:14.272085       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0115 09:50:14.369565       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0115 09:50:14.378361       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0115 09:50:14.379403       1 controller.go:624] quota admission added evaluator for: endpoints
	I0115 09:50:14.383944       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 09:50:14.585060       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	E0115 09:50:16.203590       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0115 09:50:16.203635       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0115 09:50:16.203650       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 4.751µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0115 09:50:16.205274       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0115 09:50:16.205366       1 timeout.go:142] post-timeout activity - time-elapsed: 1.814806ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	I0115 09:50:16.252589       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0115 09:50:16.266609       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0115 09:50:16.283632       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0115 09:50:28.040955       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0115 09:50:28.135254       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [744cdc172b84cf54e2e22cb5c11ba5665a6cf8a97d27e510cfe3238f0e7f1d10] <==
	I0115 09:50:28.831318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.318122ms"
	I0115 09:50:28.831464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.174µs"
	I0115 09:50:33.673388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="252.362µs"
	I0115 09:50:33.708931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="282.444µs"
	I0115 09:50:35.671034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.675404ms"
	I0115 09:50:35.674573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.918µs"
	I0115 09:50:37.533115       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0115 09:51:08.852815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-975382-m02\" does not exist"
	I0115 09:51:08.867046       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-975382-m02" podCIDRs=["10.244.1.0/24"]
	I0115 09:51:08.880742       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-znv78"
	I0115 09:51:08.880803       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pd2q7"
	I0115 09:51:12.539738       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-975382-m02"
	I0115 09:51:12.539947       1 event.go:307] "Event occurred" object="multinode-975382-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-975382-m02 event: Registered Node multinode-975382-m02 in Controller"
	I0115 09:51:17.651098       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 09:51:19.849441       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0115 09:51:19.867414       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pwx96"
	I0115 09:51:19.893363       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-h2lk5"
	I0115 09:51:19.928754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="80.526483ms"
	I0115 09:51:19.963332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.383583ms"
	I0115 09:51:19.965319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="379.209µs"
	I0115 09:51:21.799312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.036229ms"
	I0115 09:51:21.799502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.543µs"
	I0115 09:51:22.499357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.012985ms"
	I0115 09:51:22.499553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.166µs"
	I0115 09:51:22.552594       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-pwx96" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-pwx96"
	
	
	==> kube-proxy [f26cd286c1b02642d61f359c171003d7b21ecac4415b408da28e6e8a39943ded] <==
	I0115 09:50:30.217460       1 server_others.go:69] "Using iptables proxy"
	I0115 09:50:30.232817       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0115 09:50:30.283241       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 09:50:30.283288       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 09:50:30.285928       1 server_others.go:152] "Using iptables Proxier"
	I0115 09:50:30.286219       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 09:50:30.286818       1 server.go:846] "Version info" version="v1.28.4"
	I0115 09:50:30.286932       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 09:50:30.289206       1 config.go:188] "Starting service config controller"
	I0115 09:50:30.289585       1 config.go:97] "Starting endpoint slice config controller"
	I0115 09:50:30.289713       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 09:50:30.289769       1 config.go:315] "Starting node config controller"
	I0115 09:50:30.289798       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 09:50:30.290484       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 09:50:30.390368       1 shared_informer.go:318] Caches are synced for node config
	I0115 09:50:30.390419       1 shared_informer.go:318] Caches are synced for service config
	I0115 09:50:30.391589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f2a42395c73f6867e6d07c8193031aa0ddf4e32bbb32382441062163e9154370] <==
	W0115 09:50:13.430409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:50:13.430435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 09:50:13.558243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 09:50:13.558295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 09:50:13.593097       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:50:13.593176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 09:50:13.678824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 09:50:13.678930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 09:50:13.704303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:50:13.704376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0115 09:50:13.767529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:50:13.767581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 09:50:13.770039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:50:13.770086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:50:13.813376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:50:13.813811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 09:50:13.830232       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 09:50:13.830284       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 09:50:13.902754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:50:13.902807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 09:50:13.910237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 09:50:13.910262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 09:50:13.999097       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:50:13.999192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0115 09:50:15.510905       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 09:49:43 UTC, ends at Mon 2024-01-15 09:51:26 UTC. --
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: I0115 09:50:28.116797    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b9e470b-af37-44cd-8402-6ec9b3340058-lib-modules\") pod \"kindnet-7tf97\" (UID: \"3b9e470b-af37-44cd-8402-6ec9b3340058\") " pod="kube-system/kindnet-7tf97"
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: I0115 09:50:28.116876    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a779cea9-5532-4d69-9e49-ac2879e028ec-kube-proxy\") pod \"kube-proxy-jgsx4\" (UID: \"a779cea9-5532-4d69-9e49-ac2879e028ec\") " pod="kube-system/kube-proxy-jgsx4"
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255466    1261 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255494    1261 projected.go:198] Error preparing data for projected volume kube-api-access-52rws for pod kube-system/kindnet-7tf97: configmap "kube-root-ca.crt" not found
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255594    1261 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b9e470b-af37-44cd-8402-6ec9b3340058-kube-api-access-52rws podName:3b9e470b-af37-44cd-8402-6ec9b3340058 nodeName:}" failed. No retries permitted until 2024-01-15 09:50:28.75553421 +0000 UTC m=+12.511803388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-52rws" (UniqueName: "kubernetes.io/projected/3b9e470b-af37-44cd-8402-6ec9b3340058-kube-api-access-52rws") pod "kindnet-7tf97" (UID: "3b9e470b-af37-44cd-8402-6ec9b3340058") : configmap "kube-root-ca.crt" not found
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255896    1261 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255911    1261 projected.go:198] Error preparing data for projected volume kube-api-access-fqwqd for pod kube-system/kube-proxy-jgsx4: configmap "kube-root-ca.crt" not found
	Jan 15 09:50:28 multinode-975382 kubelet[1261]: E0115 09:50:28.255949    1261 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a779cea9-5532-4d69-9e49-ac2879e028ec-kube-api-access-fqwqd podName:a779cea9-5532-4d69-9e49-ac2879e028ec nodeName:}" failed. No retries permitted until 2024-01-15 09:50:28.755937729 +0000 UTC m=+12.512206907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fqwqd" (UniqueName: "kubernetes.io/projected/a779cea9-5532-4d69-9e49-ac2879e028ec-kube-api-access-fqwqd") pod "kube-proxy-jgsx4" (UID: "a779cea9-5532-4d69-9e49-ac2879e028ec") : configmap "kube-root-ca.crt" not found
	Jan 15 09:50:32 multinode-975382 kubelet[1261]: I0115 09:50:32.622403    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jgsx4" podStartSLOduration=4.62235436 podCreationTimestamp="2024-01-15 09:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:50:30.614673762 +0000 UTC m=+14.370942945" watchObservedRunningTime="2024-01-15 09:50:32.62235436 +0000 UTC m=+16.378623545"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.637523    1261 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.670699    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7tf97" podStartSLOduration=5.6706604689999995 podCreationTimestamp="2024-01-15 09:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:50:32.622715238 +0000 UTC m=+16.378984424" watchObservedRunningTime="2024-01-15 09:50:33.670660469 +0000 UTC m=+17.426929654"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.670985    1261 topology_manager.go:215] "Topology Admit Handler" podUID="f303a63a-c959-477e-89d5-c35bd0802b1b" podNamespace="kube-system" podName="coredns-5dd5756b68-n2sqg"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.678942    1261 topology_manager.go:215] "Topology Admit Handler" podUID="b8eb636d-31de-4a7e-a296-a66493d5a827" podNamespace="kube-system" podName="storage-provisioner"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.756048    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f303a63a-c959-477e-89d5-c35bd0802b1b-config-volume\") pod \"coredns-5dd5756b68-n2sqg\" (UID: \"f303a63a-c959-477e-89d5-c35bd0802b1b\") " pod="kube-system/coredns-5dd5756b68-n2sqg"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.756128    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8eb636d-31de-4a7e-a296-a66493d5a827-tmp\") pod \"storage-provisioner\" (UID: \"b8eb636d-31de-4a7e-a296-a66493d5a827\") " pod="kube-system/storage-provisioner"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.756155    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rl2\" (UniqueName: \"kubernetes.io/projected/f303a63a-c959-477e-89d5-c35bd0802b1b-kube-api-access-26rl2\") pod \"coredns-5dd5756b68-n2sqg\" (UID: \"f303a63a-c959-477e-89d5-c35bd0802b1b\") " pod="kube-system/coredns-5dd5756b68-n2sqg"
	Jan 15 09:50:33 multinode-975382 kubelet[1261]: I0115 09:50:33.756183    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xjr\" (UniqueName: \"kubernetes.io/projected/b8eb636d-31de-4a7e-a296-a66493d5a827-kube-api-access-c2xjr\") pod \"storage-provisioner\" (UID: \"b8eb636d-31de-4a7e-a296-a66493d5a827\") " pod="kube-system/storage-provisioner"
	Jan 15 09:50:35 multinode-975382 kubelet[1261]: I0115 09:50:35.655665    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.655626367 podCreationTimestamp="2024-01-15 09:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:50:35.640670563 +0000 UTC m=+19.396939749" watchObservedRunningTime="2024-01-15 09:50:35.655626367 +0000 UTC m=+19.411895553"
	Jan 15 09:50:36 multinode-975382 kubelet[1261]: I0115 09:50:36.502093    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-n2sqg" podStartSLOduration=8.502055572 podCreationTimestamp="2024-01-15 09:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:50:35.656459203 +0000 UTC m=+19.412728389" watchObservedRunningTime="2024-01-15 09:50:36.502055572 +0000 UTC m=+20.258324756"
	Jan 15 09:51:16 multinode-975382 kubelet[1261]: E0115 09:51:16.597247    1261 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 09:51:16 multinode-975382 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 09:51:16 multinode-975382 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 09:51:16 multinode-975382 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 09:51:19 multinode-975382 kubelet[1261]: I0115 09:51:19.908419    1261 topology_manager.go:215] "Topology Admit Handler" podUID="38f4390b-b4e4-467a-87f2-d4d4fc36cd18" podNamespace="default" podName="busybox-5bc68d56bd-h2lk5"
	Jan 15 09:51:19 multinode-975382 kubelet[1261]: I0115 09:51:19.918226    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jkn9\" (UniqueName: \"kubernetes.io/projected/38f4390b-b4e4-467a-87f2-d4d4fc36cd18-kube-api-access-9jkn9\") pod \"busybox-5bc68d56bd-h2lk5\" (UID: \"38f4390b-b4e4-467a-87f2-d4d4fc36cd18\") " pod="default/busybox-5bc68d56bd-h2lk5"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-975382 -n multinode-975382
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-975382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.18s)
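The kubelet messages in the log above show projected-volume setup for kindnet-7tf97 and kube-proxy-jgsx4 failing while the "kube-root-ca.crt" ConfigMap does not exist yet, with the mount retried 500ms later ("No retries permitted until ... durationBeforeRetry 500ms") and succeeding once the ConfigMap appears. A minimal, illustrative Go sketch of that fixed-delay retry pattern follows; it is not kubelet code, and the function name, delay, and error text are assumptions used only for the example.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithDelay calls op until it succeeds or timeout elapses, sleeping a
// fixed delay between attempts (mirroring the 500ms durationBeforeRetry
// visible in the kubelet log above).
func retryWithDelay(op func() error, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	err := retryWithDelay(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	}, 500*time.Millisecond, 10*time.Second)
	fmt.Printf("attempts=%d err=%v\n", attempts, err)
}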

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (689.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-975382
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-975382
E0115 09:54:12.883635   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:54:21.452675   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-975382: exit status 82 (2m1.658742429s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-975382"  ...
	* Stopping node "multinode-975382"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-975382" : exit status 82
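Exit status 82 above corresponds to the GUEST_STOP_TIMEOUT shown in the stderr block: the stop command spent roughly two minutes polling the VM, which never left state "Running", and then gave up. The following Go sketch illustrates that poll-until-stopped-with-deadline shape only; it is not minikube's implementation, and every identifier in it is hypothetical.

package main

import (
	"fmt"
	"time"
)

// waitForStop polls getState until it reports "Stopped" or the timeout
// elapses; on timeout it returns an error analogous to GUEST_STOP_TIMEOUT.
func waitForStop(getState func() string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if getState() == "Stopped" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// Simulate a VM stuck in "Running", as in the failed stop above.
	err := waitForStop(func() string { return "Running" }, time.Second, 3*time.Second)
	fmt.Println(err)
}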
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-975382 --wait=true -v=8 --alsologtostderr
E0115 09:55:44.501380   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:56:39.520290   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:59:12.883695   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:59:21.453268   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:00:35.931779   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:01:39.519713   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:03:02.567694   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:04:12.883464   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-975382 --wait=true -v=8 --alsologtostderr: (9m25.424033007s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-975382
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-975382 -n multinode-975382
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-975382 logs -n 25: (1.578666123s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1127644128/001/cp-test_multinode-975382-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382:/home/docker/cp-test_multinode-975382-m02_multinode-975382.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n multinode-975382 sudo cat                                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /home/docker/cp-test_multinode-975382-m02_multinode-975382.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03:/home/docker/cp-test_multinode-975382-m02_multinode-975382-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n multinode-975382-m03 sudo cat                                   | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /home/docker/cp-test_multinode-975382-m02_multinode-975382-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp testdata/cp-test.txt                                                | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1127644128/001/cp-test_multinode-975382-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382:/home/docker/cp-test_multinode-975382-m03_multinode-975382.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n multinode-975382 sudo cat                                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /home/docker/cp-test_multinode-975382-m03_multinode-975382.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt                       | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m02:/home/docker/cp-test_multinode-975382-m03_multinode-975382-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n                                                                 | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | multinode-975382-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-975382 ssh -n multinode-975382-m02 sudo cat                                   | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | /home/docker/cp-test_multinode-975382-m03_multinode-975382-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-975382 node stop m03                                                          | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	| node    | multinode-975382 node start                                                             | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC | 15 Jan 24 09:52 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-975382                                                                | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC |                     |
	| stop    | -p multinode-975382                                                                     | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:52 UTC |                     |
	| start   | -p multinode-975382                                                                     | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 09:54 UTC | 15 Jan 24 10:04 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-975382                                                                | multinode-975382 | jenkins | v1.32.0 | 15 Jan 24 10:04 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:54:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:54:53.468731   29671 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:54:53.468836   29671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:54:53.468845   29671 out.go:309] Setting ErrFile to fd 2...
	I0115 09:54:53.468850   29671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:54:53.469041   29671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:54:53.469552   29671 out.go:303] Setting JSON to false
	I0115 09:54:53.470371   29671 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2194,"bootTime":1705310300,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:54:53.470461   29671 start.go:138] virtualization: kvm guest
	I0115 09:54:53.472984   29671 out.go:177] * [multinode-975382] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:54:53.475129   29671 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:54:53.476621   29671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:54:53.475127   29671 notify.go:220] Checking for updates...
	I0115 09:54:53.479594   29671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:54:53.481083   29671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:54:53.482524   29671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:54:53.483873   29671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:54:53.485773   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:54:53.485848   29671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:54:53.486217   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:54:53.486259   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:54:53.500437   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0115 09:54:53.500767   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:54:53.501313   29671 main.go:141] libmachine: Using API Version  1
	I0115 09:54:53.501340   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:54:53.501645   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:54:53.501818   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:54:53.536145   29671 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 09:54:53.537546   29671 start.go:298] selected driver: kvm2
	I0115 09:54:53.537561   29671 start.go:902] validating driver "kvm2" against &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:54:53.537670   29671 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:54:53.537952   29671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:54:53.538014   29671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:54:53.551752   29671 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:54:53.552382   29671 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:54:53.552437   29671 cni.go:84] Creating CNI manager for ""
	I0115 09:54:53.552448   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 09:54:53.552455   29671 start_flags.go:321] config:
	{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:54:53.552640   29671 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:54:53.554579   29671 out.go:177] * Starting control plane node multinode-975382 in cluster multinode-975382
	I0115 09:54:53.555984   29671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:54:53.556017   29671 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:54:53.556028   29671 cache.go:56] Caching tarball of preloaded images
	I0115 09:54:53.556096   29671 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:54:53.556105   29671 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:54:53.556215   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 09:54:53.556382   29671 start.go:365] acquiring machines lock for multinode-975382: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:54:53.556418   29671 start.go:369] acquired machines lock for "multinode-975382" in 20.347µs
	I0115 09:54:53.556430   29671 start.go:96] Skipping create...Using existing machine configuration
	I0115 09:54:53.556437   29671 fix.go:54] fixHost starting: 
	I0115 09:54:53.556663   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:54:53.556690   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:54:53.569444   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0115 09:54:53.569835   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:54:53.570300   29671 main.go:141] libmachine: Using API Version  1
	I0115 09:54:53.570322   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:54:53.570665   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:54:53.570822   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:54:53.570964   29671 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:54:53.572415   29671 fix.go:102] recreateIfNeeded on multinode-975382: state=Running err=<nil>
	W0115 09:54:53.572434   29671 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 09:54:53.574526   29671 out.go:177] * Updating the running kvm2 "multinode-975382" VM ...
	I0115 09:54:53.575898   29671 machine.go:88] provisioning docker machine ...
	I0115 09:54:53.575919   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:54:53.576110   29671 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:54:53.576246   29671 buildroot.go:166] provisioning hostname "multinode-975382"
	I0115 09:54:53.576274   29671 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 09:54:53.576402   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:54:53.578579   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:54:53.578922   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:54:53.578949   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:54:53.579080   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:54:53.579239   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:54:53.579387   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:54:53.579521   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:54:53.579650   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 09:54:53.579982   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 09:54:53.580000   29671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382 && echo "multinode-975382" | sudo tee /etc/hostname
	I0115 09:55:11.998695   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:18.078692   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:21.150673   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:27.230689   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:30.302667   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:36.382758   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:39.454674   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:45.534661   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:48.606663   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:54.686741   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:55:57.758697   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:03.838707   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:06.910690   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:12.990697   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:16.062737   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:22.142705   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:25.214670   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:31.294660   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:34.366612   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:40.446639   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:43.518672   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:49.598692   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:52.670664   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:56:58.750745   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:01.822747   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:07.902665   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:10.974713   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:17.054720   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:20.126648   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:26.206683   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:29.278676   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:35.358744   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:38.430690   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:44.510666   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:47.582707   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:53.662685   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:57:56.734635   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:02.814704   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:05.886689   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:11.966712   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:15.038685   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:21.118707   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:24.194626   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:30.270682   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:33.342676   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:39.422702   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:42.494668   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:48.574635   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:51.646685   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:58:57.726673   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:00.798671   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:06.878692   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:09.950725   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:16.030670   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:19.102624   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:25.182665   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:28.254709   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:34.334723   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:37.406721   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:43.486705   29671 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.217:22: connect: no route to host
	I0115 09:59:46.488790   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:59:46.488851   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:59:46.490945   29671 machine.go:91] provisioned docker machine in 4m52.915027237s
	I0115 09:59:46.490997   29671 fix.go:56] fixHost completed within 4m52.93455973s
	I0115 09:59:46.491002   29671 start.go:83] releasing machines lock for "multinode-975382", held for 4m52.934576026s
	W0115 09:59:46.491018   29671 start.go:694] error starting host: provision: host is not running
	W0115 09:59:46.491202   29671 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 09:59:46.491221   29671 start.go:709] Will try again in 5 seconds ...
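The two warning lines above show the recovery path when the first fixHost/StartHost pass cannot reach the stopped VM: the provisioning error is logged and the whole start is retried after a fixed delay. A minimal Go sketch of that fixed-delay retry pattern follows; it is illustrative only (startHost and the attempt count are invented for the example and are not minikube's actual start.go code), with the 5-second delay taken from the log line above.

package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for the real provisioning step; here it always fails.
	return errors.New("provision: host is not running")
}

func main() {
	const maxAttempts = 2               // invented for the example
	const retryDelay = 5 * time.Second  // delay taken from the log above

	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		if attempt < maxAttempts {
			time.Sleep(retryDelay)
		}
	}
	fmt.Printf("giving up after %d attempts: %v\n", maxAttempts, err)
}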
	I0115 09:59:51.494037   29671 start.go:365] acquiring machines lock for multinode-975382: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 09:59:51.494163   29671 start.go:369] acquired machines lock for "multinode-975382" in 84.271µs
	I0115 09:59:51.494184   29671 start.go:96] Skipping create...Using existing machine configuration
	I0115 09:59:51.494189   29671 fix.go:54] fixHost starting: 
	I0115 09:59:51.494498   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:59:51.494521   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:59:51.509173   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0115 09:59:51.509637   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:59:51.510046   29671 main.go:141] libmachine: Using API Version  1
	I0115 09:59:51.510067   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:59:51.510401   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:59:51.510607   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:59:51.510761   29671 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:59:51.512357   29671 fix.go:102] recreateIfNeeded on multinode-975382: state=Stopped err=<nil>
	I0115 09:59:51.512381   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	W0115 09:59:51.512533   29671 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 09:59:51.514761   29671 out.go:177] * Restarting existing kvm2 VM for "multinode-975382" ...
	I0115 09:59:51.516391   29671 main.go:141] libmachine: (multinode-975382) Calling .Start
	I0115 09:59:51.516566   29671 main.go:141] libmachine: (multinode-975382) Ensuring networks are active...
	I0115 09:59:51.517400   29671 main.go:141] libmachine: (multinode-975382) Ensuring network default is active
	I0115 09:59:51.517855   29671 main.go:141] libmachine: (multinode-975382) Ensuring network mk-multinode-975382 is active
	I0115 09:59:51.518277   29671 main.go:141] libmachine: (multinode-975382) Getting domain xml...
	I0115 09:59:51.519036   29671 main.go:141] libmachine: (multinode-975382) Creating domain...
	I0115 09:59:52.690541   29671 main.go:141] libmachine: (multinode-975382) Waiting to get IP...
	I0115 09:59:52.691375   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:52.691773   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:52.691862   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:52.691769   30501 retry.go:31] will retry after 194.90644ms: waiting for machine to come up
	I0115 09:59:52.888353   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:52.888821   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:52.888845   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:52.888784   30501 retry.go:31] will retry after 291.45816ms: waiting for machine to come up
	I0115 09:59:53.182136   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:53.182551   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:53.182578   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:53.182514   30501 retry.go:31] will retry after 467.187665ms: waiting for machine to come up
	I0115 09:59:53.651034   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:53.651426   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:53.651478   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:53.651400   30501 retry.go:31] will retry after 582.552827ms: waiting for machine to come up
	I0115 09:59:54.235029   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:54.235428   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:54.235457   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:54.235388   30501 retry.go:31] will retry after 489.288651ms: waiting for machine to come up
	I0115 09:59:54.726177   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:54.726748   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:54.726780   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:54.726709   30501 retry.go:31] will retry after 622.758866ms: waiting for machine to come up
	I0115 09:59:55.351455   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:55.351860   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:55.351892   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:55.351815   30501 retry.go:31] will retry after 776.128242ms: waiting for machine to come up
	I0115 09:59:56.129670   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:56.130103   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:56.130126   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:56.130056   30501 retry.go:31] will retry after 1.232270123s: waiting for machine to come up
	I0115 09:59:57.363936   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:57.364400   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:57.364443   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:57.364371   30501 retry.go:31] will retry after 1.392190308s: waiting for machine to come up
	I0115 09:59:58.758900   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:59:58.759382   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 09:59:58.759409   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 09:59:58.759334   30501 retry.go:31] will retry after 1.577682632s: waiting for machine to come up
	I0115 10:00:00.339135   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:00.339656   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 10:00:00.339675   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 10:00:00.339612   30501 retry.go:31] will retry after 1.961002084s: waiting for machine to come up
	I0115 10:00:02.301988   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:02.302439   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 10:00:02.302465   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 10:00:02.302379   30501 retry.go:31] will retry after 3.151702768s: waiting for machine to come up
	I0115 10:00:05.455425   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:05.455837   29671 main.go:141] libmachine: (multinode-975382) DBG | unable to find current IP address of domain multinode-975382 in network mk-multinode-975382
	I0115 10:00:05.455867   29671 main.go:141] libmachine: (multinode-975382) DBG | I0115 10:00:05.455789   30501 retry.go:31] will retry after 4.048387635s: waiting for machine to come up
	I0115 10:00:09.508118   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.508499   29671 main.go:141] libmachine: (multinode-975382) Found IP for machine: 192.168.39.217
	I0115 10:00:09.508532   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has current primary IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.508543   29671 main.go:141] libmachine: (multinode-975382) Reserving static IP address...
	I0115 10:00:09.508961   29671 main.go:141] libmachine: (multinode-975382) Reserved static IP address: 192.168.39.217
	I0115 10:00:09.508989   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "multinode-975382", mac: "52:54:00:39:66:0a", ip: "192.168.39.217"} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.509008   29671 main.go:141] libmachine: (multinode-975382) Waiting for SSH to be available...
	I0115 10:00:09.509032   29671 main.go:141] libmachine: (multinode-975382) DBG | skip adding static IP to network mk-multinode-975382 - found existing host DHCP lease matching {name: "multinode-975382", mac: "52:54:00:39:66:0a", ip: "192.168.39.217"}
	I0115 10:00:09.509048   29671 main.go:141] libmachine: (multinode-975382) DBG | Getting to WaitForSSH function...
	I0115 10:00:09.511220   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.511583   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.511610   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.511766   29671 main.go:141] libmachine: (multinode-975382) DBG | Using SSH client type: external
	I0115 10:00:09.511802   29671 main.go:141] libmachine: (multinode-975382) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa (-rw-------)
	I0115 10:00:09.511827   29671 main.go:141] libmachine: (multinode-975382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:00:09.511848   29671 main.go:141] libmachine: (multinode-975382) DBG | About to run SSH command:
	I0115 10:00:09.511866   29671 main.go:141] libmachine: (multinode-975382) DBG | exit 0
	I0115 10:00:09.598382   29671 main.go:141] libmachine: (multinode-975382) DBG | SSH cmd err, output: <nil>: 
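The DBG lines above show how SSH availability is probed once the VM has an IP: /usr/bin/ssh is invoked with a throwaway known-hosts file, key-only auth, and the command "exit 0". A hedged Go sketch of the same probe, reusing the address, user, and key path from the log (the waitForSSH helper name and the reduced option set are choices made for the example, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// waitForSSH runs "exit 0" over ssh, mirroring the probe in the DBG lines above.
func waitForSSH(addr, user, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0", // the probe command shown in the log
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := waitForSSH("192.168.39.217", "docker",
		"/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa")
	fmt.Println("ssh probe err:", err)
}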
	I0115 10:00:09.598879   29671 main.go:141] libmachine: (multinode-975382) Calling .GetConfigRaw
	I0115 10:00:09.599621   29671 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 10:00:09.602021   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.602360   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.602389   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.602653   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 10:00:09.602847   29671 machine.go:88] provisioning docker machine ...
	I0115 10:00:09.602864   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:09.603062   29671 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 10:00:09.603221   29671 buildroot.go:166] provisioning hostname "multinode-975382"
	I0115 10:00:09.603240   29671 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 10:00:09.603400   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:09.605428   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.605760   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.605787   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.605894   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:09.606093   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:09.606276   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:09.606472   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:09.606683   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:09.607052   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 10:00:09.607067   29671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382 && echo "multinode-975382" | sudo tee /etc/hostname
	I0115 10:00:09.728798   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-975382
	
	I0115 10:00:09.728824   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:09.731737   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.732213   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.732245   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.732471   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:09.732676   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:09.732840   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:09.732966   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:09.733099   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:09.733397   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 10:00:09.733416   29671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-975382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-975382/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-975382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:00:09.848066   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:00:09.848127   29671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:00:09.848151   29671 buildroot.go:174] setting up certificates
	I0115 10:00:09.848163   29671 provision.go:83] configureAuth start
	I0115 10:00:09.848173   29671 main.go:141] libmachine: (multinode-975382) Calling .GetMachineName
	I0115 10:00:09.848445   29671 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 10:00:09.850737   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.851071   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.851093   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.851212   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:09.853355   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.853699   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.853727   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.853864   29671 provision.go:138] copyHostCerts
	I0115 10:00:09.853895   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:00:09.853924   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:00:09.853935   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:00:09.854007   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:00:09.854086   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:00:09.854110   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:00:09.854118   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:00:09.854144   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:00:09.854192   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:00:09.854209   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:00:09.854217   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:00:09.854244   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:00:09.854338   29671 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.multinode-975382 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube multinode-975382]
	I0115 10:00:09.924688   29671 provision.go:172] copyRemoteCerts
	I0115 10:00:09.924742   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:00:09.924763   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:09.927409   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.927747   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:09.927771   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:09.927946   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:09.928146   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:09.928272   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:09.928405   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:00:10.011709   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 10:00:10.011787   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:00:10.034696   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 10:00:10.034752   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0115 10:00:10.059790   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 10:00:10.059843   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:00:10.084909   29671 provision.go:86] duration metric: configureAuth took 236.733018ms
	I0115 10:00:10.084945   29671 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:00:10.085150   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:00:10.085223   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:10.087608   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.087924   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.087948   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.088088   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:10.088267   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.088453   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.088584   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:10.088749   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:10.089098   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 10:00:10.089114   29671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:00:10.385041   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:00:10.385071   29671 machine.go:91] provisioned docker machine in 782.209538ms
	I0115 10:00:10.385082   29671 start.go:300] post-start starting for "multinode-975382" (driver="kvm2")
	I0115 10:00:10.385097   29671 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:00:10.385121   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:10.385420   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:00:10.385460   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:10.387886   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.388291   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.388328   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.388438   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:10.388658   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.388816   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:10.388958   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:00:10.471523   29671 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:00:10.475754   29671 command_runner.go:130] > NAME=Buildroot
	I0115 10:00:10.475780   29671 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0115 10:00:10.475787   29671 command_runner.go:130] > ID=buildroot
	I0115 10:00:10.475803   29671 command_runner.go:130] > VERSION_ID=2021.02.12
	I0115 10:00:10.475813   29671 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0115 10:00:10.475843   29671 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:00:10.475856   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:00:10.475911   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:00:10.475988   29671 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:00:10.476012   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 10:00:10.476088   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:00:10.484620   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:00:10.507367   29671 start.go:303] post-start completed in 122.269597ms
	I0115 10:00:10.507387   29671 fix.go:56] fixHost completed within 19.013197522s
	I0115 10:00:10.507410   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:10.509950   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.510349   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.510392   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.510551   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:10.510711   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.510872   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.511050   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:10.511207   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:10.511546   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0115 10:00:10.511556   29671 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:00:10.619032   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705312810.569450088
	
	I0115 10:00:10.619050   29671 fix.go:206] guest clock: 1705312810.569450088
	I0115 10:00:10.619059   29671 fix.go:219] Guest: 2024-01-15 10:00:10.569450088 +0000 UTC Remote: 2024-01-15 10:00:10.507392114 +0000 UTC m=+317.085877752 (delta=62.057974ms)
	I0115 10:00:10.619084   29671 fix.go:190] guest clock delta is within tolerance: 62.057974ms
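The three lines above compare the guest's date output (1705312810.569450088) against the host-side timestamp and accept the roughly 62ms difference as within tolerance. A small Go sketch of that comparison; the 2-second threshold is an assumed example value, since the log only reports that the delta passed the check.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1705312810.569450088" // guest clock from the SSH output above
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// "Remote" (host-side) timestamp from the same log line.
	host := time.Date(2024, 1, 15, 10, 0, 10, 507392114, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Println("delta:", delta) // ~62.057974ms, matching the log

	const tolerance = 2 * time.Second // assumed example threshold, not from the log
	if delta <= tolerance {
		fmt.Println("guest clock delta is within tolerance")
	}
}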
	I0115 10:00:10.619091   29671 start.go:83] releasing machines lock for "multinode-975382", held for 19.124916315s
	I0115 10:00:10.619109   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:10.619383   29671 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 10:00:10.621936   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.622333   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.622366   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.622505   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:10.622947   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:10.623095   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:00:10.623178   29671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:00:10.623216   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:10.623292   29671 ssh_runner.go:195] Run: cat /version.json
	I0115 10:00:10.623315   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:00:10.625629   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.625876   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.626034   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.626061   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.626165   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:10.626280   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:10.626304   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:10.626328   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.626485   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:00:10.626565   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:10.626634   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:00:10.626702   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:00:10.626756   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:00:10.626870   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:00:10.732313   29671 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 10:00:10.732416   29671 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0115 10:00:10.732533   29671 ssh_runner.go:195] Run: systemctl --version
	I0115 10:00:10.737678   29671 command_runner.go:130] > systemd 247 (247)
	I0115 10:00:10.737726   29671 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0115 10:00:10.737962   29671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:00:10.879658   29671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 10:00:10.885786   29671 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0115 10:00:10.886097   29671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:00:10.886169   29671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:00:10.901169   29671 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0115 10:00:10.901397   29671 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
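The step above disables any bridge/podman CNI configs by renaming them to *.mk_disabled, which is what the find/mv command run over SSH does. A local-filesystem Go sketch of the same idea, for illustration only (minikube performs this remotely through its ssh_runner):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
// the same effect as the find/mv command in the log, but run locally.
func disableBridgeCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(files, err)
}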
	I0115 10:00:10.901415   29671 start.go:475] detecting cgroup driver to use...
	I0115 10:00:10.901502   29671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:00:10.914924   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:00:10.927007   29671 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:00:10.927070   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:00:10.939940   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:00:10.952028   29671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:00:10.964934   29671 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0115 10:00:11.057898   29671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:00:11.071753   29671 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 10:00:11.169209   29671 docker.go:233] disabling docker service ...
	I0115 10:00:11.169281   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:00:11.181714   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:00:11.193297   29671 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0115 10:00:11.193374   29671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:00:11.206177   29671 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 10:00:11.292685   29671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:00:11.304623   29671 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0115 10:00:11.304908   29671 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 10:00:11.393918   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:00:11.407195   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:00:11.423887   29671 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 10:00:11.424175   29671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:00:11.424233   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:00:11.433512   29671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:00:11.433563   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:00:11.442366   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:00:11.451113   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:00:11.459825   29671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:00:11.468875   29671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:00:11.476502   29671 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:00:11.476535   29671 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:00:11.476568   29671 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:00:11.489148   29671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:00:11.497014   29671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:00:11.596030   29671 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:00:11.761758   29671 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:00:11.761832   29671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:00:11.766938   29671 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 10:00:11.766962   29671 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 10:00:11.766972   29671 command_runner.go:130] > Device: 16h/22d	Inode: 759         Links: 1
	I0115 10:00:11.766983   29671 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:00:11.766991   29671 command_runner.go:130] > Access: 2024-01-15 10:00:11.697236172 +0000
	I0115 10:00:11.767002   29671 command_runner.go:130] > Modify: 2024-01-15 10:00:11.697236172 +0000
	I0115 10:00:11.767010   29671 command_runner.go:130] > Change: 2024-01-15 10:00:11.697236172 +0000
	I0115 10:00:11.767017   29671 command_runner.go:130] >  Birth: -
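The wait announced above ("Will wait 60s for socket path /var/run/crio/crio.sock"), followed by the stat output, is a poll-until-exists loop with a deadline. A Go sketch of that pattern; the 500ms poll interval is an assumption, only the 60-second budget appears in the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}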
	I0115 10:00:11.767041   29671 start.go:543] Will wait 60s for crictl version
	I0115 10:00:11.767082   29671 ssh_runner.go:195] Run: which crictl
	I0115 10:00:11.770911   29671 command_runner.go:130] > /usr/bin/crictl
	I0115 10:00:11.770975   29671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:00:11.806388   29671 command_runner.go:130] > Version:  0.1.0
	I0115 10:00:11.806408   29671 command_runner.go:130] > RuntimeName:  cri-o
	I0115 10:00:11.806433   29671 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0115 10:00:11.806442   29671 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 10:00:11.808038   29671 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:00:11.808123   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:00:11.850170   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:00:11.850192   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:00:11.850199   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:00:11.850203   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:00:11.850210   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:00:11.850218   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:00:11.850225   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:00:11.850233   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:00:11.850252   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:00:11.850265   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:00:11.850270   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:00:11.850274   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:00:11.851633   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:00:11.893956   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:00:11.893977   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:00:11.893984   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:00:11.893988   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:00:11.893994   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:00:11.893999   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:00:11.894003   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:00:11.894007   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:00:11.894012   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:00:11.894019   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:00:11.894025   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:00:11.894029   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:00:11.898587   29671 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:00:11.899892   29671 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 10:00:11.902580   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:11.902976   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:00:11.903008   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:00:11.903170   29671 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:00:11.907274   29671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:00:11.920406   29671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:00:11.920451   29671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:00:11.962277   29671 command_runner.go:130] > {
	I0115 10:00:11.962295   29671 command_runner.go:130] >   "images": [
	I0115 10:00:11.962300   29671 command_runner.go:130] >     {
	I0115 10:00:11.962307   29671 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0115 10:00:11.962312   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:11.962321   29671 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 10:00:11.962324   29671 command_runner.go:130] >       ],
	I0115 10:00:11.962329   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:11.962337   29671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0115 10:00:11.962344   29671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0115 10:00:11.962347   29671 command_runner.go:130] >       ],
	I0115 10:00:11.962352   29671 command_runner.go:130] >       "size": "750414",
	I0115 10:00:11.962356   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:11.962360   29671 command_runner.go:130] >         "value": "65535"
	I0115 10:00:11.962366   29671 command_runner.go:130] >       },
	I0115 10:00:11.962370   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:11.962376   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:11.962380   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:11.962384   29671 command_runner.go:130] >     }
	I0115 10:00:11.962389   29671 command_runner.go:130] >   ]
	I0115 10:00:11.962392   29671 command_runner.go:130] > }
	I0115 10:00:11.963559   29671 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
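The conclusion on the line above follows from the crictl images --output json output: the only image present is pause:3.9, so kube-apiserver:v1.28.4 is missing and the preload tarball must be restored. A Go sketch of that check against the JSON shape printed in the log (the struct and function names are invented for the example):

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the crictl JSON output lists the given tag.
func hasImage(raw []byte, tag string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Abbreviated form of the JSON printed in the log above.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err) // false <nil>
}

Run against the abbreviated JSON, this returns false, which matches the "assuming images are not preloaded" decision that triggers the tarball copy below.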
	I0115 10:00:11.963603   29671 ssh_runner.go:195] Run: which lz4
	I0115 10:00:11.967455   29671 command_runner.go:130] > /usr/bin/lz4
	I0115 10:00:11.967814   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 10:00:11.967894   29671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:00:11.971940   29671 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:00:11.972182   29671 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:00:11.972202   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:00:13.753363   29671 crio.go:444] Took 1.785490 seconds to copy over tarball
	I0115 10:00:13.753510   29671 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:00:16.707885   29671 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.954337395s)
	I0115 10:00:16.707916   29671 crio.go:451] Took 2.954464 seconds to extract the tarball
	I0115 10:00:16.707927   29671 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:00:16.748514   29671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:00:16.796351   29671 command_runner.go:130] > {
	I0115 10:00:16.796370   29671 command_runner.go:130] >   "images": [
	I0115 10:00:16.796374   29671 command_runner.go:130] >     {
	I0115 10:00:16.796386   29671 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0115 10:00:16.796391   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796397   29671 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 10:00:16.796401   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796405   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796413   29671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 10:00:16.796420   29671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0115 10:00:16.796425   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796429   29671 command_runner.go:130] >       "size": "65258016",
	I0115 10:00:16.796436   29671 command_runner.go:130] >       "uid": null,
	I0115 10:00:16.796442   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796451   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796456   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796459   29671 command_runner.go:130] >     },
	I0115 10:00:16.796463   29671 command_runner.go:130] >     {
	I0115 10:00:16.796472   29671 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0115 10:00:16.796476   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796481   29671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 10:00:16.796487   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796492   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796504   29671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0115 10:00:16.796514   29671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0115 10:00:16.796518   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796526   29671 command_runner.go:130] >       "size": "31470524",
	I0115 10:00:16.796533   29671 command_runner.go:130] >       "uid": null,
	I0115 10:00:16.796536   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796540   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796545   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796554   29671 command_runner.go:130] >     },
	I0115 10:00:16.796558   29671 command_runner.go:130] >     {
	I0115 10:00:16.796564   29671 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0115 10:00:16.796569   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796574   29671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 10:00:16.796577   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796582   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796591   29671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0115 10:00:16.796600   29671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0115 10:00:16.796607   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796611   29671 command_runner.go:130] >       "size": "53621675",
	I0115 10:00:16.796615   29671 command_runner.go:130] >       "uid": null,
	I0115 10:00:16.796621   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796625   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796631   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796635   29671 command_runner.go:130] >     },
	I0115 10:00:16.796639   29671 command_runner.go:130] >     {
	I0115 10:00:16.796647   29671 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0115 10:00:16.796651   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796660   29671 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 10:00:16.796669   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796676   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796689   29671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0115 10:00:16.796704   29671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0115 10:00:16.796719   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796726   29671 command_runner.go:130] >       "size": "295456551",
	I0115 10:00:16.796733   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:16.796740   29671 command_runner.go:130] >         "value": "0"
	I0115 10:00:16.796743   29671 command_runner.go:130] >       },
	I0115 10:00:16.796750   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796754   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796761   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796765   29671 command_runner.go:130] >     },
	I0115 10:00:16.796771   29671 command_runner.go:130] >     {
	I0115 10:00:16.796777   29671 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0115 10:00:16.796784   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796789   29671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 10:00:16.796795   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796799   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796806   29671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0115 10:00:16.796816   29671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0115 10:00:16.796820   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796824   29671 command_runner.go:130] >       "size": "127226832",
	I0115 10:00:16.796830   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:16.796836   29671 command_runner.go:130] >         "value": "0"
	I0115 10:00:16.796842   29671 command_runner.go:130] >       },
	I0115 10:00:16.796846   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796850   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796856   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796859   29671 command_runner.go:130] >     },
	I0115 10:00:16.796865   29671 command_runner.go:130] >     {
	I0115 10:00:16.796871   29671 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0115 10:00:16.796880   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.796885   29671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 10:00:16.796890   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796894   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.796904   29671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 10:00:16.796911   29671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0115 10:00:16.796917   29671 command_runner.go:130] >       ],
	I0115 10:00:16.796921   29671 command_runner.go:130] >       "size": "123261750",
	I0115 10:00:16.796926   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:16.796931   29671 command_runner.go:130] >         "value": "0"
	I0115 10:00:16.796951   29671 command_runner.go:130] >       },
	I0115 10:00:16.796957   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.796962   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.796969   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.796978   29671 command_runner.go:130] >     },
	I0115 10:00:16.796984   29671 command_runner.go:130] >     {
	I0115 10:00:16.796994   29671 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0115 10:00:16.797004   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.797012   29671 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 10:00:16.797021   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797028   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.797039   29671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0115 10:00:16.797054   29671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 10:00:16.797061   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797065   29671 command_runner.go:130] >       "size": "74749335",
	I0115 10:00:16.797071   29671 command_runner.go:130] >       "uid": null,
	I0115 10:00:16.797076   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.797080   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.797089   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.797098   29671 command_runner.go:130] >     },
	I0115 10:00:16.797103   29671 command_runner.go:130] >     {
	I0115 10:00:16.797116   29671 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0115 10:00:16.797126   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.797136   29671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 10:00:16.797144   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797151   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.797176   29671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 10:00:16.797190   29671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0115 10:00:16.797194   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797198   29671 command_runner.go:130] >       "size": "61551410",
	I0115 10:00:16.797202   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:16.797207   29671 command_runner.go:130] >         "value": "0"
	I0115 10:00:16.797211   29671 command_runner.go:130] >       },
	I0115 10:00:16.797218   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.797222   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.797226   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.797231   29671 command_runner.go:130] >     },
	I0115 10:00:16.797237   29671 command_runner.go:130] >     {
	I0115 10:00:16.797243   29671 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0115 10:00:16.797247   29671 command_runner.go:130] >       "repoTags": [
	I0115 10:00:16.797252   29671 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 10:00:16.797263   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797267   29671 command_runner.go:130] >       "repoDigests": [
	I0115 10:00:16.797274   29671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0115 10:00:16.797281   29671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0115 10:00:16.797284   29671 command_runner.go:130] >       ],
	I0115 10:00:16.797288   29671 command_runner.go:130] >       "size": "750414",
	I0115 10:00:16.797292   29671 command_runner.go:130] >       "uid": {
	I0115 10:00:16.797296   29671 command_runner.go:130] >         "value": "65535"
	I0115 10:00:16.797299   29671 command_runner.go:130] >       },
	I0115 10:00:16.797305   29671 command_runner.go:130] >       "username": "",
	I0115 10:00:16.797311   29671 command_runner.go:130] >       "spec": null,
	I0115 10:00:16.797317   29671 command_runner.go:130] >       "pinned": false
	I0115 10:00:16.797326   29671 command_runner.go:130] >     }
	I0115 10:00:16.797335   29671 command_runner.go:130] >   ]
	I0115 10:00:16.797349   29671 command_runner.go:130] > }
	I0115 10:00:16.797497   29671 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:00:16.797510   29671 cache_images.go:84] Images are preloaded, skipping loading
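
	(Editor's note) For context on the "all images are preloaded" check above, here is a minimal sketch, not minikube's actual crio.go/cache_images.go implementation, of how the `sudo crictl images --output json` output can be decoded into the fields visible in the log (id, repoTags, repoDigests, size, pinned) and compared against the tags expected for Kubernetes v1.28.4 on cri-o. The required-image list is illustrative and taken directly from the tags printed above.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Tags listed in the crictl output above for Kubernetes v1.28.4 on cri-o.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/kube-controller-manager:v1.28.4",
			"registry.k8s.io/kube-scheduler:v1.28.4",
			"registry.k8s.io/kube-proxy:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/coredns/coredns:v1.10.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, want := range required {
			if !have[want] {
				fmt.Println("missing image:", want)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}
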
	I0115 10:00:16.797576   29671 ssh_runner.go:195] Run: crio config
	I0115 10:00:16.848516   29671 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 10:00:16.848546   29671 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 10:00:16.848556   29671 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 10:00:16.848563   29671 command_runner.go:130] > #
	I0115 10:00:16.848585   29671 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 10:00:16.848595   29671 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 10:00:16.848605   29671 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 10:00:16.848622   29671 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 10:00:16.848630   29671 command_runner.go:130] > # reload'.
	I0115 10:00:16.848640   29671 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 10:00:16.848656   29671 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 10:00:16.848666   29671 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 10:00:16.848684   29671 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 10:00:16.848690   29671 command_runner.go:130] > [crio]
	I0115 10:00:16.848697   29671 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 10:00:16.848704   29671 command_runner.go:130] > # containers images, in this directory.
	I0115 10:00:16.848714   29671 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0115 10:00:16.848728   29671 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 10:00:16.848743   29671 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0115 10:00:16.848754   29671 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 10:00:16.848766   29671 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 10:00:16.848911   29671 command_runner.go:130] > storage_driver = "overlay"
	I0115 10:00:16.848926   29671 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 10:00:16.848933   29671 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 10:00:16.848941   29671 command_runner.go:130] > storage_option = [
	I0115 10:00:16.849080   29671 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0115 10:00:16.849193   29671 command_runner.go:130] > ]
	I0115 10:00:16.849203   29671 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 10:00:16.849209   29671 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 10:00:16.849565   29671 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 10:00:16.849574   29671 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 10:00:16.849580   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 10:00:16.849587   29671 command_runner.go:130] > # always happen on a node reboot
	I0115 10:00:16.849993   29671 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 10:00:16.850001   29671 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 10:00:16.850008   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 10:00:16.850021   29671 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 10:00:16.850428   29671 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 10:00:16.850446   29671 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 10:00:16.850461   29671 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 10:00:16.850961   29671 command_runner.go:130] > # internal_wipe = true
	I0115 10:00:16.850976   29671 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 10:00:16.850993   29671 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 10:00:16.851006   29671 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 10:00:16.851443   29671 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 10:00:16.851458   29671 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 10:00:16.851468   29671 command_runner.go:130] > [crio.api]
	I0115 10:00:16.851478   29671 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 10:00:16.851827   29671 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 10:00:16.851842   29671 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 10:00:16.852259   29671 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 10:00:16.852275   29671 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 10:00:16.852287   29671 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 10:00:16.852677   29671 command_runner.go:130] > # stream_port = "0"
	I0115 10:00:16.852692   29671 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 10:00:16.853124   29671 command_runner.go:130] > # stream_enable_tls = false
	I0115 10:00:16.853140   29671 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 10:00:16.853427   29671 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 10:00:16.853443   29671 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 10:00:16.853465   29671 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 10:00:16.853475   29671 command_runner.go:130] > # minutes.
	I0115 10:00:16.853701   29671 command_runner.go:130] > # stream_tls_cert = ""
	I0115 10:00:16.853719   29671 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 10:00:16.853729   29671 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 10:00:16.853994   29671 command_runner.go:130] > # stream_tls_key = ""
	I0115 10:00:16.854011   29671 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 10:00:16.854021   29671 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 10:00:16.854031   29671 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 10:00:16.854373   29671 command_runner.go:130] > # stream_tls_ca = ""
	I0115 10:00:16.854391   29671 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:00:16.854593   29671 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0115 10:00:16.854617   29671 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:00:16.854799   29671 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0115 10:00:16.854834   29671 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 10:00:16.854847   29671 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 10:00:16.854859   29671 command_runner.go:130] > [crio.runtime]
	I0115 10:00:16.854870   29671 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 10:00:16.854880   29671 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 10:00:16.854890   29671 command_runner.go:130] > # "nofile=1024:2048"
	I0115 10:00:16.854903   29671 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 10:00:16.854976   29671 command_runner.go:130] > # default_ulimits = [
	I0115 10:00:16.855261   29671 command_runner.go:130] > # ]
	I0115 10:00:16.855281   29671 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 10:00:16.855672   29671 command_runner.go:130] > # no_pivot = false
	I0115 10:00:16.855688   29671 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 10:00:16.855699   29671 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 10:00:16.856165   29671 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 10:00:16.856182   29671 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 10:00:16.856194   29671 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 10:00:16.856208   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:00:16.856247   29671 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0115 10:00:16.856260   29671 command_runner.go:130] > # Cgroup setting for conmon
	I0115 10:00:16.856274   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 10:00:16.856284   29671 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 10:00:16.856295   29671 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 10:00:16.856306   29671 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 10:00:16.856318   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:00:16.856324   29671 command_runner.go:130] > conmon_env = [
	I0115 10:00:16.856341   29671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0115 10:00:16.856351   29671 command_runner.go:130] > ]
	I0115 10:00:16.856360   29671 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 10:00:16.856372   29671 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 10:00:16.856383   29671 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 10:00:16.856394   29671 command_runner.go:130] > # default_env = [
	I0115 10:00:16.856403   29671 command_runner.go:130] > # ]
	I0115 10:00:16.856414   29671 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 10:00:16.856424   29671 command_runner.go:130] > # selinux = false
	I0115 10:00:16.856434   29671 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 10:00:16.856452   29671 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 10:00:16.856461   29671 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 10:00:16.856469   29671 command_runner.go:130] > # seccomp_profile = ""
	I0115 10:00:16.856478   29671 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 10:00:16.856489   29671 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 10:00:16.856498   29671 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 10:00:16.856503   29671 command_runner.go:130] > # which might increase security.
	I0115 10:00:16.856511   29671 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0115 10:00:16.856523   29671 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 10:00:16.856536   29671 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 10:00:16.856549   29671 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 10:00:16.856560   29671 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 10:00:16.856572   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:00:16.856581   29671 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 10:00:16.856591   29671 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 10:00:16.856602   29671 command_runner.go:130] > # the cgroup blockio controller.
	I0115 10:00:16.856609   29671 command_runner.go:130] > # blockio_config_file = ""
	I0115 10:00:16.856625   29671 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 10:00:16.856637   29671 command_runner.go:130] > # irqbalance daemon.
	I0115 10:00:16.856646   29671 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 10:00:16.856659   29671 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 10:00:16.856669   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:00:16.856680   29671 command_runner.go:130] > # rdt_config_file = ""
	I0115 10:00:16.856690   29671 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 10:00:16.856700   29671 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 10:00:16.856710   29671 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 10:00:16.856721   29671 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 10:00:16.856733   29671 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 10:00:16.856743   29671 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 10:00:16.856752   29671 command_runner.go:130] > # will be added.
	I0115 10:00:16.856759   29671 command_runner.go:130] > # default_capabilities = [
	I0115 10:00:16.856771   29671 command_runner.go:130] > # 	"CHOWN",
	I0115 10:00:16.856779   29671 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 10:00:16.856789   29671 command_runner.go:130] > # 	"FSETID",
	I0115 10:00:16.856796   29671 command_runner.go:130] > # 	"FOWNER",
	I0115 10:00:16.856805   29671 command_runner.go:130] > # 	"SETGID",
	I0115 10:00:16.856812   29671 command_runner.go:130] > # 	"SETUID",
	I0115 10:00:16.856822   29671 command_runner.go:130] > # 	"SETPCAP",
	I0115 10:00:16.856830   29671 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 10:00:16.856839   29671 command_runner.go:130] > # 	"KILL",
	I0115 10:00:16.856846   29671 command_runner.go:130] > # ]
	I0115 10:00:16.856859   29671 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 10:00:16.856873   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:00:16.856882   29671 command_runner.go:130] > # default_sysctls = [
	I0115 10:00:16.856886   29671 command_runner.go:130] > # ]
	I0115 10:00:16.856893   29671 command_runner.go:130] > # List of devices on the host that a
	I0115 10:00:16.856899   29671 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 10:00:16.856909   29671 command_runner.go:130] > # allowed_devices = [
	I0115 10:00:16.856916   29671 command_runner.go:130] > # 	"/dev/fuse",
	I0115 10:00:16.856926   29671 command_runner.go:130] > # ]
	I0115 10:00:16.856935   29671 command_runner.go:130] > # List of additional devices. specified as
	I0115 10:00:16.856951   29671 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 10:00:16.856961   29671 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 10:00:16.856996   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:00:16.857004   29671 command_runner.go:130] > # additional_devices = [
	I0115 10:00:16.857008   29671 command_runner.go:130] > # ]
	I0115 10:00:16.857013   29671 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 10:00:16.857019   29671 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 10:00:16.857026   29671 command_runner.go:130] > # 	"/etc/cdi",
	I0115 10:00:16.857036   29671 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 10:00:16.857046   29671 command_runner.go:130] > # ]
	I0115 10:00:16.857057   29671 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 10:00:16.857070   29671 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 10:00:16.857080   29671 command_runner.go:130] > # Defaults to false.
	I0115 10:00:16.857089   29671 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 10:00:16.857100   29671 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 10:00:16.857113   29671 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 10:00:16.857131   29671 command_runner.go:130] > # hooks_dir = [
	I0115 10:00:16.857144   29671 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 10:00:16.857154   29671 command_runner.go:130] > # ]
	I0115 10:00:16.857164   29671 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 10:00:16.857178   29671 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 10:00:16.857187   29671 command_runner.go:130] > # its default mounts from the following two files:
	I0115 10:00:16.857196   29671 command_runner.go:130] > #
	I0115 10:00:16.857207   29671 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 10:00:16.857221   29671 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 10:00:16.857234   29671 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 10:00:16.857243   29671 command_runner.go:130] > #
	I0115 10:00:16.857253   29671 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 10:00:16.857267   29671 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 10:00:16.857281   29671 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 10:00:16.857293   29671 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 10:00:16.857300   29671 command_runner.go:130] > #
	I0115 10:00:16.857308   29671 command_runner.go:130] > # default_mounts_file = ""
	I0115 10:00:16.857318   29671 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 10:00:16.857332   29671 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 10:00:16.857344   29671 command_runner.go:130] > pids_limit = 1024
	I0115 10:00:16.857358   29671 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0115 10:00:16.857368   29671 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 10:00:16.857382   29671 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 10:00:16.857398   29671 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 10:00:16.857409   29671 command_runner.go:130] > # log_size_max = -1
	I0115 10:00:16.857424   29671 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0115 10:00:16.857434   29671 command_runner.go:130] > # log_to_journald = false
	I0115 10:00:16.857446   29671 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 10:00:16.857458   29671 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 10:00:16.857470   29671 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 10:00:16.857483   29671 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 10:00:16.857496   29671 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 10:00:16.857507   29671 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 10:00:16.857517   29671 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 10:00:16.857521   29671 command_runner.go:130] > # read_only = false
	I0115 10:00:16.857530   29671 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 10:00:16.857540   29671 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 10:00:16.857547   29671 command_runner.go:130] > # live configuration reload.
	I0115 10:00:16.857551   29671 command_runner.go:130] > # log_level = "info"
	I0115 10:00:16.857563   29671 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 10:00:16.857574   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:00:16.857584   29671 command_runner.go:130] > # log_filter = ""
	I0115 10:00:16.857597   29671 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 10:00:16.857610   29671 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 10:00:16.857620   29671 command_runner.go:130] > # separated by comma.
	I0115 10:00:16.857627   29671 command_runner.go:130] > # uid_mappings = ""
	I0115 10:00:16.857641   29671 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 10:00:16.857654   29671 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 10:00:16.857665   29671 command_runner.go:130] > # separated by comma.
	I0115 10:00:16.857671   29671 command_runner.go:130] > # gid_mappings = ""
	I0115 10:00:16.857684   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 10:00:16.857699   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:00:16.857712   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:00:16.857722   29671 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 10:00:16.857736   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 10:00:16.857751   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:00:16.857764   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:00:16.857775   29671 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 10:00:16.857788   29671 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 10:00:16.857800   29671 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 10:00:16.857812   29671 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 10:00:16.857822   29671 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 10:00:16.857833   29671 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 10:00:16.857847   29671 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 10:00:16.857858   29671 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 10:00:16.857870   29671 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 10:00:16.857879   29671 command_runner.go:130] > drop_infra_ctr = false
	I0115 10:00:16.857890   29671 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 10:00:16.857900   29671 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 10:00:16.857907   29671 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 10:00:16.857913   29671 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 10:00:16.857923   29671 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 10:00:16.857939   29671 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 10:00:16.857951   29671 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 10:00:16.857966   29671 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 10:00:16.857976   29671 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0115 10:00:16.857993   29671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 10:00:16.858003   29671 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 10:00:16.858009   29671 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 10:00:16.858018   29671 command_runner.go:130] > # default_runtime = "runc"
	I0115 10:00:16.858024   29671 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 10:00:16.858033   29671 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 10:00:16.858041   29671 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0115 10:00:16.858049   29671 command_runner.go:130] > # creation as a file is not desired either.
	I0115 10:00:16.858057   29671 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 10:00:16.858063   29671 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 10:00:16.858069   29671 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 10:00:16.858073   29671 command_runner.go:130] > # ]
	I0115 10:00:16.858079   29671 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 10:00:16.858087   29671 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 10:00:16.858096   29671 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 10:00:16.858105   29671 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 10:00:16.858108   29671 command_runner.go:130] > #
	I0115 10:00:16.858113   29671 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 10:00:16.858118   29671 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 10:00:16.858123   29671 command_runner.go:130] > #  runtime_type = "oci"
	I0115 10:00:16.858127   29671 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 10:00:16.858135   29671 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 10:00:16.858139   29671 command_runner.go:130] > #  allowed_annotations = []
	I0115 10:00:16.858145   29671 command_runner.go:130] > # Where:
	I0115 10:00:16.858150   29671 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 10:00:16.858156   29671 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 10:00:16.858164   29671 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 10:00:16.858172   29671 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 10:00:16.858179   29671 command_runner.go:130] > #   in $PATH.
	I0115 10:00:16.858185   29671 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 10:00:16.858192   29671 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 10:00:16.858198   29671 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 10:00:16.858204   29671 command_runner.go:130] > #   state.
	I0115 10:00:16.858213   29671 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 10:00:16.858219   29671 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0115 10:00:16.858227   29671 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 10:00:16.858233   29671 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 10:00:16.858241   29671 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 10:00:16.858247   29671 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 10:00:16.858256   29671 command_runner.go:130] > #   The currently recognized values are:
	I0115 10:00:16.858263   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 10:00:16.858272   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 10:00:16.858277   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 10:00:16.858284   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 10:00:16.858292   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 10:00:16.858300   29671 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 10:00:16.858306   29671 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 10:00:16.858315   29671 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 10:00:16.858320   29671 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 10:00:16.858326   29671 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 10:00:16.858333   29671 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0115 10:00:16.858339   29671 command_runner.go:130] > runtime_type = "oci"
	I0115 10:00:16.858343   29671 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 10:00:16.858350   29671 command_runner.go:130] > runtime_config_path = ""
	I0115 10:00:16.858354   29671 command_runner.go:130] > monitor_path = ""
	I0115 10:00:16.858360   29671 command_runner.go:130] > monitor_cgroup = ""
	I0115 10:00:16.858364   29671 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 10:00:16.858370   29671 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 10:00:16.858377   29671 command_runner.go:130] > # running containers
	I0115 10:00:16.858381   29671 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 10:00:16.858388   29671 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 10:00:16.858449   29671 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 10:00:16.858464   29671 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 10:00:16.858473   29671 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 10:00:16.858481   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 10:00:16.858490   29671 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 10:00:16.858494   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 10:00:16.858503   29671 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 10:00:16.858511   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 10:00:16.858519   29671 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 10:00:16.858525   29671 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 10:00:16.858531   29671 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 10:00:16.858541   29671 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 10:00:16.858548   29671 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 10:00:16.858556   29671 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 10:00:16.858566   29671 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 10:00:16.858576   29671 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 10:00:16.858584   29671 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 10:00:16.858591   29671 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 10:00:16.858597   29671 command_runner.go:130] > # Example:
	I0115 10:00:16.858602   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 10:00:16.858607   29671 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 10:00:16.858614   29671 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 10:00:16.858621   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 10:00:16.858625   29671 command_runner.go:130] > # cpuset = 0
	I0115 10:00:16.858631   29671 command_runner.go:130] > # cpushares = "0-1"
	I0115 10:00:16.858640   29671 command_runner.go:130] > # Where:
	I0115 10:00:16.858647   29671 command_runner.go:130] > # The workload name is workload-type.
	I0115 10:00:16.858654   29671 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 10:00:16.858661   29671 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 10:00:16.858667   29671 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 10:00:16.858676   29671 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 10:00:16.858687   29671 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 10:00:16.858693   29671 command_runner.go:130] > # 
	I0115 10:00:16.858699   29671 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 10:00:16.858705   29671 command_runner.go:130] > #
	I0115 10:00:16.858710   29671 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 10:00:16.858717   29671 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 10:00:16.858723   29671 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 10:00:16.858730   29671 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 10:00:16.858736   29671 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 10:00:16.858742   29671 command_runner.go:130] > [crio.image]
	I0115 10:00:16.858749   29671 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 10:00:16.858756   29671 command_runner.go:130] > # default_transport = "docker://"
	I0115 10:00:16.858764   29671 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 10:00:16.858772   29671 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:00:16.858777   29671 command_runner.go:130] > # global_auth_file = ""
	I0115 10:00:16.858784   29671 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 10:00:16.858790   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:00:16.858797   29671 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 10:00:16.858803   29671 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 10:00:16.858812   29671 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:00:16.858819   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:00:16.858823   29671 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 10:00:16.858832   29671 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 10:00:16.858838   29671 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 10:00:16.858846   29671 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 10:00:16.858851   29671 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 10:00:16.858856   29671 command_runner.go:130] > # pause_command = "/pause"
	I0115 10:00:16.858862   29671 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 10:00:16.858870   29671 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 10:00:16.858876   29671 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 10:00:16.858886   29671 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 10:00:16.858891   29671 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 10:00:16.858895   29671 command_runner.go:130] > # signature_policy = ""
	I0115 10:00:16.858900   29671 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 10:00:16.858906   29671 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 10:00:16.858910   29671 command_runner.go:130] > # changing them here.
	I0115 10:00:16.858914   29671 command_runner.go:130] > # insecure_registries = [
	I0115 10:00:16.858917   29671 command_runner.go:130] > # ]
	I0115 10:00:16.858923   29671 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 10:00:16.858927   29671 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 10:00:16.858931   29671 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 10:00:16.858936   29671 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 10:00:16.858940   29671 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 10:00:16.858946   29671 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0115 10:00:16.858949   29671 command_runner.go:130] > # CNI plugins.
	I0115 10:00:16.858953   29671 command_runner.go:130] > [crio.network]
	I0115 10:00:16.858958   29671 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 10:00:16.858963   29671 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0115 10:00:16.858970   29671 command_runner.go:130] > # cni_default_network = ""
	I0115 10:00:16.858977   29671 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 10:00:16.858987   29671 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 10:00:16.858993   29671 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 10:00:16.858999   29671 command_runner.go:130] > # plugin_dirs = [
	I0115 10:00:16.859003   29671 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 10:00:16.859006   29671 command_runner.go:130] > # ]
	I0115 10:00:16.859012   29671 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 10:00:16.859018   29671 command_runner.go:130] > [crio.metrics]
	I0115 10:00:16.859023   29671 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 10:00:16.859029   29671 command_runner.go:130] > enable_metrics = true
	I0115 10:00:16.859035   29671 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 10:00:16.859043   29671 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 10:00:16.859049   29671 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0115 10:00:16.859057   29671 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 10:00:16.859062   29671 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 10:00:16.859067   29671 command_runner.go:130] > # metrics_collectors = [
	I0115 10:00:16.859070   29671 command_runner.go:130] > # 	"operations",
	I0115 10:00:16.859077   29671 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 10:00:16.859084   29671 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 10:00:16.859088   29671 command_runner.go:130] > # 	"operations_errors",
	I0115 10:00:16.859095   29671 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 10:00:16.859099   29671 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 10:00:16.859103   29671 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 10:00:16.859110   29671 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 10:00:16.859114   29671 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 10:00:16.859119   29671 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 10:00:16.859125   29671 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 10:00:16.859130   29671 command_runner.go:130] > # 	"containers_oom_total",
	I0115 10:00:16.859134   29671 command_runner.go:130] > # 	"containers_oom",
	I0115 10:00:16.859140   29671 command_runner.go:130] > # 	"processes_defunct",
	I0115 10:00:16.859144   29671 command_runner.go:130] > # 	"operations_total",
	I0115 10:00:16.859148   29671 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 10:00:16.859153   29671 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 10:00:16.859160   29671 command_runner.go:130] > # 	"operations_errors_total",
	I0115 10:00:16.859164   29671 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 10:00:16.859173   29671 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 10:00:16.859178   29671 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 10:00:16.859182   29671 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 10:00:16.859189   29671 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 10:00:16.859193   29671 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 10:00:16.859199   29671 command_runner.go:130] > # ]
	I0115 10:00:16.859204   29671 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 10:00:16.859208   29671 command_runner.go:130] > # metrics_port = 9090
	I0115 10:00:16.859214   29671 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 10:00:16.859218   29671 command_runner.go:130] > # metrics_socket = ""
	I0115 10:00:16.859224   29671 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 10:00:16.859230   29671 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 10:00:16.859238   29671 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 10:00:16.859243   29671 command_runner.go:130] > # certificate on any modification event.
	I0115 10:00:16.859249   29671 command_runner.go:130] > # metrics_cert = ""
	I0115 10:00:16.859254   29671 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 10:00:16.859261   29671 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 10:00:16.859265   29671 command_runner.go:130] > # metrics_key = ""
	I0115 10:00:16.859274   29671 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 10:00:16.859282   29671 command_runner.go:130] > [crio.tracing]
	I0115 10:00:16.859288   29671 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 10:00:16.859294   29671 command_runner.go:130] > # enable_tracing = false
	I0115 10:00:16.859299   29671 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 10:00:16.859306   29671 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 10:00:16.859311   29671 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 10:00:16.859316   29671 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 10:00:16.859325   29671 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 10:00:16.859329   29671 command_runner.go:130] > [crio.stats]
	I0115 10:00:16.859335   29671 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 10:00:16.859340   29671 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 10:00:16.859347   29671 command_runner.go:130] > # stats_collection_period = 0
	I0115 10:00:16.859790   29671 command_runner.go:130] ! time="2024-01-15 10:00:16.794703273Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0115 10:00:16.859809   29671 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 10:00:16.859884   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:00:16.859895   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:00:16.859910   29671 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:00:16.859934   29671 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-975382 NodeName:multinode-975382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:00:16.860080   29671 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-975382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:00:16.860170   29671 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:00:16.860228   29671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:00:16.868817   29671 command_runner.go:130] > kubeadm
	I0115 10:00:16.868831   29671 command_runner.go:130] > kubectl
	I0115 10:00:16.868835   29671 command_runner.go:130] > kubelet
	I0115 10:00:16.868931   29671 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:00:16.868986   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:00:16.877325   29671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0115 10:00:16.892718   29671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:00:16.907870   29671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0115 10:00:16.923512   29671 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0115 10:00:16.927016   29671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:00:16.938727   29671 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382 for IP: 192.168.39.217
	I0115 10:00:16.938752   29671 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:00:16.938902   29671 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:00:16.938960   29671 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:00:16.939054   29671 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key
	I0115 10:00:16.939132   29671 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key.891f873f
	I0115 10:00:16.939184   29671 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key
	I0115 10:00:16.939195   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 10:00:16.939210   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 10:00:16.939229   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 10:00:16.939246   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 10:00:16.939262   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 10:00:16.939283   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 10:00:16.939301   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 10:00:16.939319   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 10:00:16.939383   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:00:16.939424   29671 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:00:16.939438   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:00:16.939479   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:00:16.939523   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:00:16.939552   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:00:16.939608   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:00:16.939650   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 10:00:16.939671   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:00:16.939687   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 10:00:16.940525   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:00:16.967500   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:00:16.991497   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:00:17.014355   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:00:17.036746   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:00:17.059353   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:00:17.082396   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:00:17.105654   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:00:17.130636   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:00:17.154606   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:00:17.182041   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:00:17.210264   29671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:00:17.228348   29671 ssh_runner.go:195] Run: openssl version
	I0115 10:00:17.234784   29671 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0115 10:00:17.235125   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:00:17.245523   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:00:17.250445   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:00:17.250474   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:00:17.250511   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:00:17.255907   29671 command_runner.go:130] > 3ec20f2e
	I0115 10:00:17.255977   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:00:17.266510   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:00:17.276826   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:00:17.281810   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:00:17.281931   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:00:17.281980   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:00:17.287864   29671 command_runner.go:130] > b5213941
	I0115 10:00:17.287942   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:00:17.298818   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:00:17.311799   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:00:17.316436   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:00:17.316687   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:00:17.316728   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:00:17.321937   29671 command_runner.go:130] > 51391683
	I0115 10:00:17.322152   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:00:17.332148   29671 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:00:17.336577   29671 command_runner.go:130] > ca.crt
	I0115 10:00:17.336594   29671 command_runner.go:130] > ca.key
	I0115 10:00:17.336601   29671 command_runner.go:130] > healthcheck-client.crt
	I0115 10:00:17.336609   29671 command_runner.go:130] > healthcheck-client.key
	I0115 10:00:17.336617   29671 command_runner.go:130] > peer.crt
	I0115 10:00:17.336623   29671 command_runner.go:130] > peer.key
	I0115 10:00:17.336629   29671 command_runner.go:130] > server.crt
	I0115 10:00:17.336639   29671 command_runner.go:130] > server.key
	I0115 10:00:17.336687   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:00:17.343076   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.343126   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:00:17.348766   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.349015   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:00:17.354641   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.354968   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:00:17.360735   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.360782   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:00:17.368028   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.368113   29671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:00:17.375575   29671 command_runner.go:130] > Certificate will not expire
	I0115 10:00:17.375633   29671 kubeadm.go:404] StartCluster: {Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:00:17.375771   29671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:00:17.375822   29671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:00:17.422154   29671 cri.go:89] found id: ""
	I0115 10:00:17.422213   29671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:00:17.432350   29671 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0115 10:00:17.432369   29671 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0115 10:00:17.432375   29671 command_runner.go:130] > /var/lib/minikube/etcd:
	I0115 10:00:17.432378   29671 command_runner.go:130] > member
	I0115 10:00:17.432641   29671 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:00:17.432659   29671 kubeadm.go:636] restartCluster start
	I0115 10:00:17.432710   29671 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:00:17.441811   29671 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:17.442322   29671 kubeconfig.go:92] found "multinode-975382" server: "https://192.168.39.217:8443"
	I0115 10:00:17.442791   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:00:17.443049   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:00:17.443632   29671 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 10:00:17.443820   29671 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:00:17.452466   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:17.452503   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:17.462855   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:17.953210   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:17.953296   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:17.964491   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:18.453144   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:18.453232   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:18.464549   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:18.953274   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:18.953355   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:18.964567   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:19.453227   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:19.453293   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:19.464357   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:19.952881   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:19.952976   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:19.965478   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:20.453009   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:20.453074   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:20.465022   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:20.953152   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:20.953220   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:20.964495   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:21.453067   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:21.453134   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:21.464145   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:21.953166   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:21.953240   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:21.964426   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:22.452937   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:22.453033   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:22.465171   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:22.952686   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:22.952760   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:22.965044   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:23.452708   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:23.452799   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:23.463724   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:23.953289   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:23.953381   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:23.964624   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:24.453251   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:24.453354   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:24.464585   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:24.952922   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:24.952991   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:24.963831   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:25.453431   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:25.453515   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:25.464601   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:25.953232   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:25.953317   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:25.964749   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:26.453357   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:26.453422   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:26.464673   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:26.952611   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:26.952697   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:26.964374   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:27.453124   29671 api_server.go:166] Checking apiserver status ...
	I0115 10:00:27.453204   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:00:27.464410   29671 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:00:27.464451   29671 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:00:27.464461   29671 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:00:27.464473   29671 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:00:27.464527   29671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:00:27.500018   29671 cri.go:89] found id: ""
	I0115 10:00:27.500096   29671 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:00:27.515218   29671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:00:27.524101   29671 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0115 10:00:27.524126   29671 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0115 10:00:27.524138   29671 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0115 10:00:27.524152   29671 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:00:27.524190   29671 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:00:27.524245   29671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:00:27.532638   29671 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:00:27.532670   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:27.632873   29671 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:00:27.634341   29671 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0115 10:00:27.635503   29671 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0115 10:00:27.636654   29671 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:00:27.638121   29671 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0115 10:00:27.639052   29671 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:00:27.640038   29671 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0115 10:00:27.640651   29671 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0115 10:00:27.641128   29671 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:00:27.641646   29671 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:00:27.642307   29671 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:00:27.643005   29671 command_runner.go:130] > [certs] Using the existing "sa" key
	I0115 10:00:27.644611   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:27.693067   29671 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:00:27.900906   29671 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:00:28.057542   29671 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:00:28.252430   29671 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:00:28.503804   29671 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:00:28.506494   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:28.569145   29671 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:00:28.570561   29671 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:00:28.570577   29671 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 10:00:28.706220   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:28.788924   29671 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:00:28.788967   29671 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:00:28.788979   29671 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:00:28.788990   29671 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:00:28.789014   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:28.864046   29671 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:00:28.867746   29671 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:00:28.867825   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:29.368732   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:29.868206   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:30.368155   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:30.868379   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:31.368646   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:31.392040   29671 command_runner.go:130] > 1103
	I0115 10:00:31.392112   29671 api_server.go:72] duration metric: took 2.524367958s to wait for apiserver process to appear ...
	I0115 10:00:31.392126   29671 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:00:31.392144   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:35.171528   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:00:35.171554   29671 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:00:35.171567   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:35.205533   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:00:35.205561   29671 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:00:35.392944   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:35.398112   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:00:35.398140   29671 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:00:35.892718   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:35.897495   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:00:35.897526   29671 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:00:36.393134   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:36.398394   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:00:36.398435   29671 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:00:36.892308   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:36.898878   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0115 10:00:36.898979   29671 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0115 10:00:36.898992   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:36.899003   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:36.899019   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:36.910853   29671 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0115 10:00:36.910878   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:36.910887   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:36.910896   29671 round_trippers.go:580]     Content-Length: 264
	I0115 10:00:36.910903   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:36 GMT
	I0115 10:00:36.910911   29671 round_trippers.go:580]     Audit-Id: 7266d817-6d99-49b9-ab7e-21adaf60f8a3
	I0115 10:00:36.910933   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:36.910947   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:36.910956   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:36.911052   29671 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0115 10:00:36.911124   29671 api_server.go:141] control plane version: v1.28.4
	I0115 10:00:36.911141   29671 api_server.go:131] duration metric: took 5.519009444s to wait for apiserver health ...
	I0115 10:00:36.911148   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:00:36.911154   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:00:36.913119   29671 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 10:00:36.914921   29671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 10:00:36.920787   29671 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 10:00:36.920805   29671 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0115 10:00:36.920814   29671 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0115 10:00:36.920820   29671 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:00:36.920828   29671 command_runner.go:130] > Access: 2024-01-15 10:00:04.443236172 +0000
	I0115 10:00:36.920834   29671 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0115 10:00:36.920840   29671 command_runner.go:130] > Change: 2024-01-15 10:00:02.526236172 +0000
	I0115 10:00:36.920847   29671 command_runner.go:130] >  Birth: -
	I0115 10:00:36.920996   29671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 10:00:36.921007   29671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 10:00:36.938782   29671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 10:00:38.053782   29671 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:00:38.064843   29671 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:00:38.068751   29671 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 10:00:38.085487   29671 command_runner.go:130] > daemonset.apps/kindnet configured
	I0115 10:00:38.088243   29671 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.149438259s)
	I0115 10:00:38.088270   29671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:00:38.088377   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:38.088390   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.088402   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.088416   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.091994   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.092009   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.092016   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.092021   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.092029   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.092046   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.092056   29671 round_trippers.go:580]     Audit-Id: 3b291631-877e-4de8-9a33-4b38d83cf383
	I0115 10:00:38.092066   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.093154   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"826"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82886 chars]
	I0115 10:00:38.097431   29671 system_pods.go:59] 12 kube-system pods found
	I0115 10:00:38.097466   29671 system_pods.go:61] "coredns-5dd5756b68-n2sqg" [f303a63a-c959-477e-89d5-c35bd0802b1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:00:38.097476   29671 system_pods.go:61] "etcd-multinode-975382" [6b8601c3-a366-4171-9221-4b83d091aff7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:00:38.097494   29671 system_pods.go:61] "kindnet-7tf97" [3b9e470b-af37-44cd-8402-6ec9b3340058] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:38.097507   29671 system_pods.go:61] "kindnet-pd2q7" [5414de37-d69e-426b-ac29-1827fe0bd753] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:38.097516   29671 system_pods.go:61] "kindnet-q2p7k" [22f0fe0e-fe44-4ba3-b3a8-bbbd7b48a588] Running
	I0115 10:00:38.097525   29671 system_pods.go:61] "kube-apiserver-multinode-975382" [0c174d15-48a9-4394-ba76-207b7cbc42a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:00:38.097534   29671 system_pods.go:61] "kube-controller-manager-multinode-975382" [0fabcc70-f923-40a7-86b4-70c0cc2213ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:00:38.097539   29671 system_pods.go:61] "kube-proxy-fxwtq" [54b5ed4b-d227-46d0-b113-85849b0c0700] Running
	I0115 10:00:38.097549   29671 system_pods.go:61] "kube-proxy-jgsx4" [a779cea9-5532-4d69-9e49-ac2879e028ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:00:38.097556   29671 system_pods.go:61] "kube-proxy-znv78" [bb4d831f-7308-4f44-b944-fdfdf1d583c2] Running
	I0115 10:00:38.097561   29671 system_pods.go:61] "kube-scheduler-multinode-975382" [d7c93aee-4d7c-4264-8d65-de8781105178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:00:38.097567   29671 system_pods.go:61] "storage-provisioner" [b8eb636d-31de-4a7e-a296-a66493d5a827] Running
	I0115 10:00:38.097620   29671 system_pods.go:74] duration metric: took 9.299196ms to wait for pod list to return data ...
	I0115 10:00:38.097632   29671 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:00:38.097702   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 10:00:38.097711   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.097718   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.097724   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.100884   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.100906   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.100928   29671 round_trippers.go:580]     Audit-Id: 2f434093-0f26-4906-9d35-d63f6358500b
	I0115 10:00:38.100937   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.100946   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.100953   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.100962   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.100971   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.101309   29671 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"827"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16475 chars]
	I0115 10:00:38.102071   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:38.102095   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:38.102107   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:38.102114   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:38.102119   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:38.102125   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:38.102132   29671 node_conditions.go:105] duration metric: took 4.493976ms to run NodePressure ...
	I0115 10:00:38.102156   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:00:38.332925   29671 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0115 10:00:38.332944   29671 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0115 10:00:38.332967   29671 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:00:38.333062   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0115 10:00:38.333073   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.333081   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.333086   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.335880   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:38.335904   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.335914   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.335930   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.335938   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.335945   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.335954   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.335965   29671 round_trippers.go:580]     Audit-Id: 95d4f78b-5cf4-464c-9b32-7c8aef0cd8c9
	I0115 10:00:38.336967   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"829"},"items":[{"metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"803","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0115 10:00:38.337901   29671 kubeadm.go:787] kubelet initialised
	I0115 10:00:38.337922   29671 kubeadm.go:788] duration metric: took 4.94163ms waiting for restarted kubelet to initialise ...
	I0115 10:00:38.337928   29671 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:00:38.337985   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:38.337993   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.338000   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.338006   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.341695   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.341711   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.341720   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.341728   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.341736   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.341751   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.341761   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.341774   29671 round_trippers.go:580]     Audit-Id: ad10dbdb-7241-456e-966f-b5c6203c7fdc
	I0115 10:00:38.342823   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"829"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I0115 10:00:38.345272   29671 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.345352   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:38.345362   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.345368   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.345374   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.347225   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:38.347242   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.347248   29671 round_trippers.go:580]     Audit-Id: 18f4f566-50b6-4e62-b025-1cbe642d6726
	I0115 10:00:38.347254   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.347259   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.347267   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.347272   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.347278   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.347405   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:38.347825   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:38.347838   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.347848   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.347857   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.351067   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.351091   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.351100   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.351109   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.351117   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.351124   29671 round_trippers.go:580]     Audit-Id: 27b98556-0272-4b26-b586-e580ba9d9495
	I0115 10:00:38.351131   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.351136   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.351285   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:38.351563   29671 pod_ready.go:97] node "multinode-975382" hosting pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.351581   29671 pod_ready.go:81] duration metric: took 6.290731ms waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:38.351591   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.351602   29671 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.351649   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 10:00:38.351661   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.351667   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.351673   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.354838   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.354858   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.354864   29671 round_trippers.go:580]     Audit-Id: a1e8134d-ec89-4968-8549-5eaf8c373a92
	I0115 10:00:38.354869   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.354875   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.354880   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.354891   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.354896   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.356048   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"803","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0115 10:00:38.356347   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:38.356360   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.356367   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.356372   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.358278   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:38.358293   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.358299   29671 round_trippers.go:580]     Audit-Id: 05661535-5209-448d-9886-a80c394b0b0f
	I0115 10:00:38.358304   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.358312   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.358320   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.358328   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.358340   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.358523   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:38.358863   29671 pod_ready.go:97] node "multinode-975382" hosting pod "etcd-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.358888   29671 pod_ready.go:81] duration metric: took 7.279767ms waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:38.358898   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "etcd-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.358912   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.358976   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 10:00:38.358992   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.359002   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.359012   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.362286   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:38.362304   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.362314   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.362322   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.362329   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.362338   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.362346   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.362354   29671 round_trippers.go:580]     Audit-Id: 5d186718-79ef-47d7-a3fa-a8246de35c77
	I0115 10:00:38.362530   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"807","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0115 10:00:38.362999   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:38.363015   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.363025   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.363033   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.370565   29671 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 10:00:38.370585   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.370598   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.370606   29671 round_trippers.go:580]     Audit-Id: e875d19d-f32c-4429-8279-a60e4a67c8c6
	I0115 10:00:38.370616   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.370626   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.370637   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.370648   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.370780   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:38.371110   29671 pod_ready.go:97] node "multinode-975382" hosting pod "kube-apiserver-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.371130   29671 pod_ready.go:81] duration metric: took 12.209012ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:38.371140   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "kube-apiserver-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.371155   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.371215   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 10:00:38.371228   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.371238   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.371249   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.374157   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:38.374176   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.374185   29671 round_trippers.go:580]     Audit-Id: 3c29afe7-7d41-4d46-95c5-e8c9b11779dc
	I0115 10:00:38.374193   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.374204   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.374215   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.374226   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.374237   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.374683   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"801","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0115 10:00:38.489029   29671 request.go:629] Waited for 113.866179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:38.489096   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:38.489101   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.489108   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.489114   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.491765   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:38.491783   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.491793   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.491801   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.491809   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.491821   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.491846   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.491858   29671 round_trippers.go:580]     Audit-Id: d20cc7c4-9345-4c12-a110-588eda083275
	I0115 10:00:38.492161   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:38.492475   29671 pod_ready.go:97] node "multinode-975382" hosting pod "kube-controller-manager-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.492504   29671 pod_ready.go:81] duration metric: took 121.338099ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:38.492516   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "kube-controller-manager-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:38.492526   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.689377   29671 request.go:629] Waited for 196.773256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:00:38.689467   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:00:38.689487   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.689498   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.689504   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.692457   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:38.692474   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.692480   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.692486   29671 round_trippers.go:580]     Audit-Id: 895cbc40-faf1-4e8d-b37f-f17be3663918
	I0115 10:00:38.692491   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.692497   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.692508   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.692528   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.692943   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"713","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0115 10:00:38.888622   29671 request.go:629] Waited for 195.287993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:00:38.888676   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:00:38.888681   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:38.888688   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:38.888694   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:38.891182   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:38.891198   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:38.891204   29671 round_trippers.go:580]     Audit-Id: c0939897-6296-4672-92f8-5ec33d1587ba
	I0115 10:00:38.891209   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:38.891216   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:38.891225   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:38.891235   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:38.891247   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:38 GMT
	I0115 10:00:38.891377   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"e8425595-976c-4f6f-8ad3-6cb2de7275fd","resourceVersion":"758","creationTimestamp":"2024-01-15T09:52:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_52_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:52:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4085 chars]
	I0115 10:00:38.891650   29671 pod_ready.go:92] pod "kube-proxy-fxwtq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:38.891664   29671 pod_ready.go:81] duration metric: took 399.124818ms waiting for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:38.891671   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:39.088757   29671 request.go:629] Waited for 197.017793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:00:39.088837   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:00:39.088842   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:39.088850   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:39.088856   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:39.091328   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:39.091344   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:39.091349   29671 round_trippers.go:580]     Audit-Id: bf2f67a2-9c97-4190-ba00-23b0b1904ae6
	I0115 10:00:39.091356   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:39.091364   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:39.091374   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:39.091383   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:39.091394   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:39 GMT
	I0115 10:00:39.091516   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"827","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 10:00:39.289239   29671 request.go:629] Waited for 197.345069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:39.289315   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:39.289320   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:39.289327   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:39.289333   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:39.291736   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:39.291754   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:39.291761   29671 round_trippers.go:580]     Audit-Id: ec433ee1-1e38-4371-b247-f874a8b6881b
	I0115 10:00:39.291772   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:39.291778   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:39.291787   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:39.291792   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:39.291798   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:39 GMT
	I0115 10:00:39.292307   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:39.292613   29671 pod_ready.go:97] node "multinode-975382" hosting pod "kube-proxy-jgsx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:39.292630   29671 pod_ready.go:81] duration metric: took 400.953137ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:39.292637   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "kube-proxy-jgsx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:39.292644   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:39.488693   29671 request.go:629] Waited for 195.979237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:00:39.488745   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:00:39.488750   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:39.488758   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:39.488764   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:39.491606   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:39.491622   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:39.491629   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:39.491638   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:39 GMT
	I0115 10:00:39.491647   29671 round_trippers.go:580]     Audit-Id: cf55573f-fa48-445c-97ef-58ceaee1506a
	I0115 10:00:39.491661   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:39.491670   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:39.491681   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:39.491807   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-znv78","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb4d831f-7308-4f44-b944-fdfdf1d583c2","resourceVersion":"507","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0115 10:00:39.688532   29671 request.go:629] Waited for 196.290594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:00:39.688606   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:00:39.688616   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:39.688629   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:39.688643   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:39.692523   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:39.692546   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:39.692554   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:39.692559   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:39.692564   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:39 GMT
	I0115 10:00:39.692569   29671 round_trippers.go:580]     Audit-Id: 00b01e09-7fa5-47cf-be7c-8cf77915373b
	I0115 10:00:39.692577   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:39.692582   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:39.692756   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"745","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_52_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0115 10:00:39.693053   29671 pod_ready.go:92] pod "kube-proxy-znv78" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:39.693070   29671 pod_ready.go:81] duration metric: took 400.41984ms waiting for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:39.693083   29671 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:39.889033   29671 request.go:629] Waited for 195.884246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:00:39.889105   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:00:39.889112   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:39.889123   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:39.889132   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:39.893267   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:00:39.893288   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:39.893298   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:39.893306   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:39.893313   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:39.893320   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:39 GMT
	I0115 10:00:39.893328   29671 round_trippers.go:580]     Audit-Id: a1505fc9-8cdd-4baf-b9f5-df9a5d668568
	I0115 10:00:39.893349   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:39.893564   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"802","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0115 10:00:40.089274   29671 request.go:629] Waited for 195.343614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:40.089380   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:40.089392   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:40.089403   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:40.089411   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:40.092057   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:40.092077   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:40.092087   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:40.092095   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:40.092102   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:40.092111   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:40.092123   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:40 GMT
	I0115 10:00:40.092132   29671 round_trippers.go:580]     Audit-Id: a4ab6cfe-57c8-4258-a247-f1b1bb5d92b5
	I0115 10:00:40.092443   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:40.092734   29671 pod_ready.go:97] node "multinode-975382" hosting pod "kube-scheduler-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:40.092754   29671 pod_ready.go:81] duration metric: took 399.66249ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	E0115 10:00:40.092763   29671 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-975382" hosting pod "kube-scheduler-multinode-975382" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-975382" has status "Ready":"False"
	I0115 10:00:40.092774   29671 pod_ready.go:38] duration metric: took 1.754835998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:00:40.092792   29671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:00:40.107894   29671 command_runner.go:130] > -16
	I0115 10:00:40.107999   29671 ops.go:34] apiserver oom_adj: -16
	I0115 10:00:40.108016   29671 kubeadm.go:640] restartCluster took 22.675351569s
	I0115 10:00:40.108026   29671 kubeadm.go:406] StartCluster complete in 22.732396018s
	I0115 10:00:40.108047   29671 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:00:40.108149   29671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:00:40.108757   29671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:00:40.108957   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:00:40.109078   29671 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:00:40.111511   29671 out.go:177] * Enabled addons: 
	I0115 10:00:40.109274   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:00:40.109316   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:00:40.112817   29671 addons.go:505] enable addons completed in 3.747122ms: enabled=[]
	I0115 10:00:40.113069   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:00:40.113390   29671 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 10:00:40.113399   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:40.113411   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:40.113422   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:40.116271   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:40.116293   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:40.116302   29671 round_trippers.go:580]     Content-Length: 291
	I0115 10:00:40.116322   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:40 GMT
	I0115 10:00:40.116332   29671 round_trippers.go:580]     Audit-Id: 9d3f26e6-69d5-468e-b0ca-f851b75cc67f
	I0115 10:00:40.116338   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:40.116345   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:40.116356   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:40.116368   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:40.116442   29671 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"828","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 10:00:40.116631   29671 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-975382" context rescaled to 1 replicas
	I0115 10:00:40.116660   29671 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:00:40.119192   29671 out.go:177] * Verifying Kubernetes components...
	I0115 10:00:40.120596   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:00:40.207763   29671 command_runner.go:130] > apiVersion: v1
	I0115 10:00:40.207797   29671 command_runner.go:130] > data:
	I0115 10:00:40.207805   29671 command_runner.go:130] >   Corefile: |
	I0115 10:00:40.207812   29671 command_runner.go:130] >     .:53 {
	I0115 10:00:40.207818   29671 command_runner.go:130] >         log
	I0115 10:00:40.207822   29671 command_runner.go:130] >         errors
	I0115 10:00:40.207826   29671 command_runner.go:130] >         health {
	I0115 10:00:40.207831   29671 command_runner.go:130] >            lameduck 5s
	I0115 10:00:40.207835   29671 command_runner.go:130] >         }
	I0115 10:00:40.207839   29671 command_runner.go:130] >         ready
	I0115 10:00:40.207845   29671 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0115 10:00:40.207852   29671 command_runner.go:130] >            pods insecure
	I0115 10:00:40.207858   29671 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0115 10:00:40.207865   29671 command_runner.go:130] >            ttl 30
	I0115 10:00:40.207871   29671 command_runner.go:130] >         }
	I0115 10:00:40.207882   29671 command_runner.go:130] >         prometheus :9153
	I0115 10:00:40.207892   29671 command_runner.go:130] >         hosts {
	I0115 10:00:40.207904   29671 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0115 10:00:40.207913   29671 command_runner.go:130] >            fallthrough
	I0115 10:00:40.207922   29671 command_runner.go:130] >         }
	I0115 10:00:40.207929   29671 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0115 10:00:40.207934   29671 command_runner.go:130] >            max_concurrent 1000
	I0115 10:00:40.207946   29671 command_runner.go:130] >         }
	I0115 10:00:40.207953   29671 command_runner.go:130] >         cache 30
	I0115 10:00:40.207964   29671 command_runner.go:130] >         loop
	I0115 10:00:40.207976   29671 command_runner.go:130] >         reload
	I0115 10:00:40.207986   29671 command_runner.go:130] >         loadbalance
	I0115 10:00:40.207995   29671 command_runner.go:130] >     }
	I0115 10:00:40.208004   29671 command_runner.go:130] > kind: ConfigMap
	I0115 10:00:40.208013   29671 command_runner.go:130] > metadata:
	I0115 10:00:40.208023   29671 command_runner.go:130] >   creationTimestamp: "2024-01-15T09:50:16Z"
	I0115 10:00:40.208029   29671 command_runner.go:130] >   name: coredns
	I0115 10:00:40.208034   29671 command_runner.go:130] >   namespace: kube-system
	I0115 10:00:40.208038   29671 command_runner.go:130] >   resourceVersion: "395"
	I0115 10:00:40.208049   29671 command_runner.go:130] >   uid: 8494dd8b-c116-469e-9602-9f697bb20e4e
	I0115 10:00:40.210525   29671 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:00:40.210572   29671 node_ready.go:35] waiting up to 6m0s for node "multinode-975382" to be "Ready" ...
	I0115 10:00:40.288925   29671 request.go:629] Waited for 78.237005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:40.289007   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:40.289014   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:40.289021   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:40.289027   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:40.291596   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:40.291614   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:40.291621   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:40.291626   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:40.291631   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:40.291639   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:40.291647   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:40 GMT
	I0115 10:00:40.291658   29671 round_trippers.go:580]     Audit-Id: 495d8c36-98f6-45b7-a282-6ddaaa3bc90c
	I0115 10:00:40.291981   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:40.711684   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:40.711719   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:40.711730   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:40.711740   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:40.714058   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:40.714080   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:40.714087   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:40.714092   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:40.714097   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:40.714102   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:40 GMT
	I0115 10:00:40.714107   29671 round_trippers.go:580]     Audit-Id: 9c7d7400-e59a-4da7-bd01-c7168ecfade6
	I0115 10:00:40.714115   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:40.714349   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:41.210992   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:41.211021   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:41.211034   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:41.211044   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:41.213636   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:41.213660   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:41.213674   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:41.213683   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:41.213698   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:41 GMT
	I0115 10:00:41.213707   29671 round_trippers.go:580]     Audit-Id: 575bce21-2faa-44f0-b079-e93a2686dd74
	I0115 10:00:41.213716   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:41.213725   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:41.213924   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:41.711092   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:41.711118   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:41.711129   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:41.711136   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:41.714526   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:41.714552   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:41.714571   29671 round_trippers.go:580]     Audit-Id: 51e09a72-ec94-4b00-ad6a-168d76db7252
	I0115 10:00:41.714580   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:41.714589   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:41.714598   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:41.714610   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:41.714618   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:41 GMT
	I0115 10:00:41.715178   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:42.211661   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:42.211683   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:42.211691   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:42.211697   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:42.215058   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:42.215080   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:42.215090   29671 round_trippers.go:580]     Audit-Id: 40306424-c990-43dc-b1b1-c7d8f27ff9c4
	I0115 10:00:42.215098   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:42.215106   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:42.215114   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:42.215122   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:42.215136   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:42 GMT
	I0115 10:00:42.215422   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:42.215788   29671 node_ready.go:58] node "multinode-975382" has status "Ready":"False"
	I0115 10:00:42.711047   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:42.711066   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:42.711074   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:42.711080   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:42.713666   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:42.713692   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:42.713703   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:42.713713   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:42.713721   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:42.713730   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:42 GMT
	I0115 10:00:42.713739   29671 round_trippers.go:580]     Audit-Id: d23aacaa-1db3-4caa-abae-e2c75c1682fc
	I0115 10:00:42.713749   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:42.713983   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:43.211742   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:43.211773   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:43.211788   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:43.211796   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:43.214315   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:43.214333   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:43.214340   29671 round_trippers.go:580]     Audit-Id: de43d6de-14d9-4c6f-b549-2dcd004d0965
	I0115 10:00:43.214345   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:43.214350   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:43.214356   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:43.214361   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:43.214366   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:43 GMT
	I0115 10:00:43.214948   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:43.711731   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:43.711754   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:43.711762   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:43.711768   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:43.714533   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:43.714552   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:43.714559   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:43.714565   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:43.714570   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:43 GMT
	I0115 10:00:43.714575   29671 round_trippers.go:580]     Audit-Id: e6f6489c-cd02-460d-888e-be5a6e8088ec
	I0115 10:00:43.714580   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:43.714597   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:43.714955   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:44.211687   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:44.211710   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:44.211718   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:44.211724   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:44.213915   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:44.213936   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:44.213947   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:44.213956   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:44.213964   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:44.213973   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:44 GMT
	I0115 10:00:44.213981   29671 round_trippers.go:580]     Audit-Id: 2bc56b65-20d9-4b9f-9c2b-e9d7406628f7
	I0115 10:00:44.213989   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:44.214251   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:44.710867   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:44.710905   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:44.710912   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:44.710918   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:44.714353   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:44.714381   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:44.714394   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:44.714400   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:44.714405   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:44.714410   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:44.714433   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:44 GMT
	I0115 10:00:44.714446   29671 round_trippers.go:580]     Audit-Id: 4ff7a02c-03ae-4ead-9957-473cdf94d363
	I0115 10:00:44.714924   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:44.715229   29671 node_ready.go:58] node "multinode-975382" has status "Ready":"False"
	I0115 10:00:45.211658   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:45.211680   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:45.211688   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:45.211694   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:45.214125   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:45.214141   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:45.214150   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:45 GMT
	I0115 10:00:45.214158   29671 round_trippers.go:580]     Audit-Id: 5b73f99d-5cc6-4caf-b299-f2fe86da6ca7
	I0115 10:00:45.214165   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:45.214173   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:45.214198   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:45.214207   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:45.214441   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"736","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0115 10:00:45.711041   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:45.711065   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:45.711073   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:45.711079   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:45.714248   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:45.714278   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:45.714290   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:45.714298   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:45.714329   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:45.714341   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:45 GMT
	I0115 10:00:45.714351   29671 round_trippers.go:580]     Audit-Id: 69796024-c91f-494d-8e26-5306146379bf
	I0115 10:00:45.714364   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:45.714612   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:45.715045   29671 node_ready.go:49] node "multinode-975382" has status "Ready":"True"
	I0115 10:00:45.715068   29671 node_ready.go:38] duration metric: took 5.504473643s waiting for node "multinode-975382" to be "Ready" ...
	I0115 10:00:45.715079   29671 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:00:45.715167   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:45.715181   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:45.715192   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:45.715204   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:45.718597   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:45.718614   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:45.718622   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:45 GMT
	I0115 10:00:45.718632   29671 round_trippers.go:580]     Audit-Id: e3c7fa92-a1c6-4c7c-9c65-3ff813782647
	I0115 10:00:45.718641   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:45.718651   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:45.718661   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:45.718672   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:45.720225   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82733 chars]
	I0115 10:00:45.722900   29671 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:45.722962   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:45.722972   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:45.722979   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:45.722992   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:45.725375   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:45.725390   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:45.725396   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:45.725401   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:45.725406   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:45 GMT
	I0115 10:00:45.725411   29671 round_trippers.go:580]     Audit-Id: 54b99b01-9233-4cc1-969b-6f7655d964af
	I0115 10:00:45.725415   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:45.725423   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:45.725707   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:45.726195   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:45.726217   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:45.726229   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:45.726240   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:45.728269   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:45.728288   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:45.728298   29671 round_trippers.go:580]     Audit-Id: c6ae45b1-306e-40b6-8871-d5902a188254
	I0115 10:00:45.728306   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:45.728314   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:45.728321   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:45.728329   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:45.728341   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:45 GMT
	I0115 10:00:45.728528   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:46.223178   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:46.223207   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:46.223220   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:46.223229   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:46.225760   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:46.225778   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:46.225785   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:46.225790   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:46.225798   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:46.225804   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:46 GMT
	I0115 10:00:46.225809   29671 round_trippers.go:580]     Audit-Id: d0227680-25e1-4e50-9040-a30f3bcd620a
	I0115 10:00:46.225814   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:46.225995   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:46.226516   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:46.226530   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:46.226537   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:46.226543   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:46.229091   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:46.229112   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:46.229121   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:46.229129   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:46 GMT
	I0115 10:00:46.229137   29671 round_trippers.go:580]     Audit-Id: 684fe700-6bf9-4925-b1d3-0e8651ceb3ea
	I0115 10:00:46.229145   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:46.229158   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:46.229168   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:46.229557   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:46.723386   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:46.723412   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:46.723422   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:46.723431   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:46.726307   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:46.726325   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:46.726377   29671 round_trippers.go:580]     Audit-Id: fd8c1cc7-9caa-4a2b-a19f-a645011d6b6a
	I0115 10:00:46.726391   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:46.726397   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:46.726406   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:46.726439   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:46.726451   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:46 GMT
	I0115 10:00:46.726760   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:46.727200   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:46.727223   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:46.727233   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:46.727242   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:46.730667   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:46.730683   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:46.730689   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:46.730695   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:46.730699   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:46.730704   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:46.730710   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:46 GMT
	I0115 10:00:46.730717   29671 round_trippers.go:580]     Audit-Id: 0209c1a5-111e-4db2-8c19-1a16394a02a7
	I0115 10:00:46.731033   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:47.223728   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:47.223754   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:47.223761   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:47.223767   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:47.226761   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:47.226781   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:47.226793   29671 round_trippers.go:580]     Audit-Id: 5adc3d61-d37b-4d27-a46e-a669e88ef443
	I0115 10:00:47.226802   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:47.226809   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:47.226820   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:47.226833   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:47.226846   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:47 GMT
	I0115 10:00:47.227215   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:47.227671   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:47.227686   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:47.227693   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:47.227700   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:47.230684   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:47.230702   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:47.230711   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:47 GMT
	I0115 10:00:47.230719   29671 round_trippers.go:580]     Audit-Id: cf93e718-193d-4d6d-8856-e7bde9575448
	I0115 10:00:47.230728   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:47.230743   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:47.230753   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:47.230761   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:47.231144   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:47.723949   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:47.723989   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:47.724002   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:47.724012   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:47.727559   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:47.727577   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:47.727584   29671 round_trippers.go:580]     Audit-Id: 8ee27204-cd44-4ef7-8a6b-505a56bd73df
	I0115 10:00:47.727590   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:47.727595   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:47.727612   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:47.727617   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:47.727622   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:47 GMT
	I0115 10:00:47.728641   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:47.729125   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:47.729137   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:47.729144   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:47.729149   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:47.738062   29671 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0115 10:00:47.738080   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:47.738087   29671 round_trippers.go:580]     Audit-Id: d5193e24-139a-406d-9ab8-a3b3ff7c5230
	I0115 10:00:47.738092   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:47.738097   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:47.738102   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:47.738107   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:47.738112   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:47 GMT
	I0115 10:00:47.738761   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:47.739042   29671 pod_ready.go:102] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"False"
	I0115 10:00:48.223348   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:48.223383   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:48.223391   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:48.223399   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:48.225529   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:48.225544   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:48.225551   29671 round_trippers.go:580]     Audit-Id: 2b9c0ff8-b9f9-4e21-ad3a-b757841eef0b
	I0115 10:00:48.225556   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:48.225561   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:48.225566   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:48.225571   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:48.225576   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:48 GMT
	I0115 10:00:48.226100   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:48.226516   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:48.226530   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:48.226537   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:48.226545   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:48.228460   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:48.228476   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:48.228482   29671 round_trippers.go:580]     Audit-Id: 41b2dc38-997d-44da-bdf7-729600afdcd9
	I0115 10:00:48.228487   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:48.228492   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:48.228498   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:48.228503   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:48.228509   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:48 GMT
	I0115 10:00:48.228646   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:48.723275   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:48.723296   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:48.723304   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:48.723309   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:48.726898   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:48.726912   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:48.726923   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:48.726931   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:48.726939   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:48 GMT
	I0115 10:00:48.726952   29671 round_trippers.go:580]     Audit-Id: f0567158-0c3e-44f4-9c95-0af974822f8a
	I0115 10:00:48.726965   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:48.726973   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:48.727617   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:48.728070   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:48.728086   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:48.728096   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:48.728104   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:48.730633   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:48.730651   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:48.730660   29671 round_trippers.go:580]     Audit-Id: 162b53fb-49e7-46aa-ba5d-6e0697aa3355
	I0115 10:00:48.730669   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:48.730678   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:48.730688   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:48.730697   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:48.730702   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:48 GMT
	I0115 10:00:48.730866   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:49.223507   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:49.223530   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:49.223538   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:49.223544   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:49.226230   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:49.226243   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:49.226249   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:49 GMT
	I0115 10:00:49.226254   29671 round_trippers.go:580]     Audit-Id: 963b9ef2-328a-492b-9e9d-5975217f5409
	I0115 10:00:49.226259   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:49.226265   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:49.226269   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:49.226275   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:49.226927   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:49.227334   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:49.227348   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:49.227355   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:49.227361   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:49.229327   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:49.229347   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:49.229357   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:49.229375   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:49.229383   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:49.229395   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:49.229406   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:49 GMT
	I0115 10:00:49.229417   29671 round_trippers.go:580]     Audit-Id: 1a578359-f7e6-48c2-8a65-dc93e8f7a605
	I0115 10:00:49.229911   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:49.723511   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:49.723543   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:49.723552   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:49.723561   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:49.726466   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:49.726485   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:49.726491   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:49.726497   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:49 GMT
	I0115 10:00:49.726502   29671 round_trippers.go:580]     Audit-Id: 6f6840b1-4c2c-4097-84ee-1752a5b10e04
	I0115 10:00:49.726507   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:49.726512   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:49.726520   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:49.726692   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:49.727264   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:49.727281   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:49.727294   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:49.727304   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:49.731035   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:49.731048   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:49.731054   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:49 GMT
	I0115 10:00:49.731059   29671 round_trippers.go:580]     Audit-Id: 91afd250-bd50-4485-8a2a-be5c41433b4e
	I0115 10:00:49.731064   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:49.731069   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:49.731074   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:49.731079   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:49.732019   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:50.223659   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:50.223686   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:50.223722   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:50.223734   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:50.226079   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:50.226106   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:50.226115   29671 round_trippers.go:580]     Audit-Id: c7d69649-a4c7-4c25-b5d7-124d119b6ded
	I0115 10:00:50.226124   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:50.226132   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:50.226141   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:50.226148   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:50.226156   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:50 GMT
	I0115 10:00:50.226483   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:50.227064   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:50.227088   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:50.227099   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:50.227109   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:50.229187   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:50.229210   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:50.229220   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:50.229228   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:50.229235   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:50.229243   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:50.229250   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:50 GMT
	I0115 10:00:50.229261   29671 round_trippers.go:580]     Audit-Id: cc5be205-7188-48aa-841c-917403152505
	I0115 10:00:50.229380   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:50.229779   29671 pod_ready.go:102] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"False"
	I0115 10:00:50.724137   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:50.724158   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:50.724166   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:50.724172   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:50.727143   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:50.727167   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:50.727180   29671 round_trippers.go:580]     Audit-Id: 10a5a8e4-7449-463c-bb59-789ac017a854
	I0115 10:00:50.727188   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:50.727197   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:50.727203   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:50.727211   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:50.727219   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:50 GMT
	I0115 10:00:50.727602   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:50.728210   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:50.728228   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:50.728239   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:50.728246   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:50.730254   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:50.730270   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:50.730279   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:50.730286   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:50.730295   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:50 GMT
	I0115 10:00:50.730307   29671 round_trippers.go:580]     Audit-Id: 7c0abe88-4caa-43ae-96b5-9932a719e742
	I0115 10:00:50.730315   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:50.730329   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:50.730652   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:51.223272   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:51.223295   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:51.223303   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:51.223309   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:51.226041   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:51.226061   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:51.226068   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:51.226077   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:51 GMT
	I0115 10:00:51.226084   29671 round_trippers.go:580]     Audit-Id: 5265a38e-91da-4355-88bb-3e1866a0102a
	I0115 10:00:51.226093   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:51.226099   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:51.226107   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:51.226307   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:51.226751   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:51.226764   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:51.226770   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:51.226776   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:51.229426   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:51.229443   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:51.229462   29671 round_trippers.go:580]     Audit-Id: d5304c6a-da58-4655-9f70-0e6ac39a513d
	I0115 10:00:51.229471   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:51.229479   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:51.229486   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:51.229498   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:51.229511   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:51 GMT
	I0115 10:00:51.229671   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:51.723667   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:51.723689   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:51.723697   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:51.723703   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:51.726508   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:51.726534   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:51.726548   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:51.726556   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:51.726563   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:51 GMT
	I0115 10:00:51.726580   29671 round_trippers.go:580]     Audit-Id: ab569c6c-9083-45bb-8d73-b62861d77539
	I0115 10:00:51.726593   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:51.726600   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:51.727092   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:51.727693   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:51.727712   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:51.727721   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:51.727729   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:51.729964   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:51.729980   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:51.729987   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:51.729992   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:51.729997   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:51.730010   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:51 GMT
	I0115 10:00:51.730016   29671 round_trippers.go:580]     Audit-Id: 1480accc-da0e-4056-a50f-602ce51133cc
	I0115 10:00:51.730020   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:51.730185   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:52.223692   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:52.223723   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:52.223735   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:52.223748   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:52.226284   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:52.226306   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:52.226316   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:52.226324   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:52 GMT
	I0115 10:00:52.226332   29671 round_trippers.go:580]     Audit-Id: 2e7546ee-1876-44bf-a368-8b3238e34a1b
	I0115 10:00:52.226343   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:52.226354   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:52.226365   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:52.226687   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:52.227318   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:52.227339   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:52.227356   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:52.227371   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:52.230152   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:52.230169   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:52.230184   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:52 GMT
	I0115 10:00:52.230192   29671 round_trippers.go:580]     Audit-Id: c9f2a16c-fba2-4e32-9a55-902bfe7e14c3
	I0115 10:00:52.230200   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:52.230209   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:52.230219   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:52.230227   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:52.230793   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:52.231080   29671 pod_ready.go:102] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"False"
	I0115 10:00:52.723443   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:52.723468   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:52.723480   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:52.723489   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:52.729138   29671 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 10:00:52.729158   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:52.729165   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:52 GMT
	I0115 10:00:52.729170   29671 round_trippers.go:580]     Audit-Id: 96649075-a898-4634-994e-a57816dbc612
	I0115 10:00:52.729175   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:52.729180   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:52.729188   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:52.729193   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:52.729923   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"806","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0115 10:00:52.730354   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:52.730366   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:52.730373   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:52.730378   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:52.733897   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:52.733918   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:52.733929   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:52.733938   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:52 GMT
	I0115 10:00:52.733962   29671 round_trippers.go:580]     Audit-Id: e717ea72-0c79-4c68-960a-c7339887d4cc
	I0115 10:00:52.733971   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:52.733976   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:52.733981   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:52.734252   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.223662   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:00:53.223684   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.223692   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.223698   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.227200   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:53.227219   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.227235   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.227240   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.227246   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.227251   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.227256   29671 round_trippers.go:580]     Audit-Id: bcc09cb5-c60a-4243-90f7-d5b266437f2e
	I0115 10:00:53.227261   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.227671   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0115 10:00:53.228104   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.228118   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.228125   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.228131   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.230504   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:53.230524   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.230535   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.230544   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.230552   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.230562   29671 round_trippers.go:580]     Audit-Id: 5d00f113-6f77-4d2d-a642-3190cd9336ab
	I0115 10:00:53.230568   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.230575   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.230783   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.231176   29671 pod_ready.go:92] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.231196   29671 pod_ready.go:81] duration metric: took 7.508273261s waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.231205   29671 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.231265   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 10:00:53.231274   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.231281   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.231286   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.233295   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:53.233311   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.233321   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.233337   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.233346   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.233352   29671 round_trippers.go:580]     Audit-Id: 7c03a03c-e906-4a97-b333-eead8646719c
	I0115 10:00:53.233358   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.233364   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.233571   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"865","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0115 10:00:53.233893   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.233905   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.233913   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.233920   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.235605   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:53.235624   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.235633   29671 round_trippers.go:580]     Audit-Id: 2b6f5322-b020-4338-b4ee-b0fb1c494cb1
	I0115 10:00:53.235641   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.235648   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.235656   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.235667   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.235674   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.235934   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.236317   29671 pod_ready.go:92] pod "etcd-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.236338   29671 pod_ready.go:81] duration metric: took 5.125723ms waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.236359   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.236421   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 10:00:53.236432   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.236442   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.236453   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.239025   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:53.239042   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.239052   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.239061   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.239070   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.239084   29671 round_trippers.go:580]     Audit-Id: 06c4da48-6c1c-471a-8203-50724d14d4a3
	I0115 10:00:53.239093   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.239098   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.239285   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"873","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0115 10:00:53.239666   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.239679   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.239686   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.239691   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.241394   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:53.241405   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.241411   29671 round_trippers.go:580]     Audit-Id: 747c3961-6995-4ddb-839a-223861baee76
	I0115 10:00:53.241416   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.241421   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.241425   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.241430   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.241437   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.241619   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.241896   29671 pod_ready.go:92] pod "kube-apiserver-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.241909   29671 pod_ready.go:81] duration metric: took 5.539706ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.241917   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.241997   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 10:00:53.242010   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.242019   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.242028   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.244050   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:53.244062   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.244068   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.244073   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.244078   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.244083   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.244088   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.244097   29671 round_trippers.go:580]     Audit-Id: 7d36d797-ac42-41d0-b7ef-85cfe5514a33
	I0115 10:00:53.245708   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"887","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0115 10:00:53.246044   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.246056   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.246063   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.246068   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.250304   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:00:53.250321   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.250328   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.250334   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.250339   29671 round_trippers.go:580]     Audit-Id: 97720856-66a0-4a69-a28c-2676e8d03c74
	I0115 10:00:53.250344   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.250349   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.250356   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.250503   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.250876   29671 pod_ready.go:92] pod "kube-controller-manager-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.250897   29671 pod_ready.go:81] duration metric: took 8.964245ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.250906   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.250966   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:00:53.250975   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.250982   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.250990   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.252804   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:53.252817   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.252823   29671 round_trippers.go:580]     Audit-Id: 7d48dc70-16e5-4c55-9b7e-e97096986e6a
	I0115 10:00:53.252828   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.252833   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.252838   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.252843   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.252848   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.253278   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"713","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0115 10:00:53.253728   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:00:53.253745   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.253751   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.253757   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.255581   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:53.255594   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.255600   29671 round_trippers.go:580]     Audit-Id: 8e17e920-363f-41fc-922c-a6d385d2eae6
	I0115 10:00:53.255605   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.255610   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.255617   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.255624   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.255633   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.255784   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"e8425595-976c-4f6f-8ad3-6cb2de7275fd","resourceVersion":"881","creationTimestamp":"2024-01-15T09:52:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_52_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:52:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0115 10:00:53.256093   29671 pod_ready.go:92] pod "kube-proxy-fxwtq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.256113   29671 pod_ready.go:81] duration metric: took 5.201105ms waiting for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.256129   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.424429   29671 request.go:629] Waited for 168.248705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:00:53.424496   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:00:53.424504   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.424512   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.424521   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.428572   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:00:53.428600   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.428607   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.428612   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.428618   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.428623   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.428628   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.428633   29671 round_trippers.go:580]     Audit-Id: 57c79c72-61e8-432c-adde-fc55cc1770bd
	I0115 10:00:53.429981   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"827","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 10:00:53.623790   29671 request.go:629] Waited for 193.385535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.623852   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:53.623857   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.623865   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.623874   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.626604   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:53.626622   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.626630   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.626638   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.626646   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.626654   29671 round_trippers.go:580]     Audit-Id: 09ba5edb-4f52-4498-bbc4-c844336db27e
	I0115 10:00:53.626665   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.626678   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.627126   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:53.627431   29671 pod_ready.go:92] pod "kube-proxy-jgsx4" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:53.627447   29671 pod_ready.go:81] duration metric: took 371.309552ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.627459   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:53.823795   29671 request.go:629] Waited for 196.259053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:00:53.823885   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:00:53.823896   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:53.823908   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:53.823922   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:53.826569   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:53.826595   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:53.826603   29671 round_trippers.go:580]     Audit-Id: 1ee73a49-2e25-4f4e-a1f9-7e682b96b221
	I0115 10:00:53.826611   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:53.826620   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:53.826628   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:53.826636   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:53.826644   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:53 GMT
	I0115 10:00:53.826879   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-znv78","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb4d831f-7308-4f44-b944-fdfdf1d583c2","resourceVersion":"507","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0115 10:00:54.024734   29671 request.go:629] Waited for 197.420393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:00:54.024808   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:00:54.024814   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.024822   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.024828   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.027676   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:54.027693   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.027706   29671 round_trippers.go:580]     Audit-Id: 69f0b4b8-63d2-46f0-a223-1f366b8c7279
	I0115 10:00:54.027717   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.027733   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.027741   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.027751   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.027762   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.027842   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7","resourceVersion":"745","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_52_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0115 10:00:54.028179   29671 pod_ready.go:92] pod "kube-proxy-znv78" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:54.028195   29671 pod_ready.go:81] duration metric: took 400.728662ms waiting for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:54.028209   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:54.224247   29671 request.go:629] Waited for 195.968377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:00:54.224341   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:00:54.224351   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.224358   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.224366   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.227590   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:54.227611   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.227617   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.227623   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.227628   29671 round_trippers.go:580]     Audit-Id: afed64d7-0280-4774-967d-8cb14a629293
	I0115 10:00:54.227633   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.227641   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.227647   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.227786   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"889","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0115 10:00:54.424512   29671 request.go:629] Waited for 196.331565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:54.424566   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:00:54.424571   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.424578   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.424584   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.427585   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:00:54.427602   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.427611   29671 round_trippers.go:580]     Audit-Id: 33bb52d6-aada-427d-bae7-6c5f0d99ba33
	I0115 10:00:54.427619   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.427635   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.427644   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.427654   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.427662   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.427832   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0115 10:00:54.428185   29671 pod_ready.go:92] pod "kube-scheduler-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:00:54.428203   29671 pod_ready.go:81] duration metric: took 399.982855ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:00:54.428217   29671 pod_ready.go:38] duration metric: took 8.713126407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
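
The readiness wait logged above is minikube repeatedly GETting each control-plane pod (and its node) until the pod reports Ready:"True". For orientation only, here is a minimal client-go sketch of that kind of poll; it is not minikube's actual pod_ready implementation, and the kubeconfig path and the 500ms cadence are assumptions loosely based on the re-poll interval visible in the log.

// Minimal sketch, not minikube's code: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True", as in the log lines above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the re-poll cadence visible in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Using ~/.kube/config is an assumption for illustration; minikube builds its own REST config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-n2sqg", 6*time.Minute)
	fmt.Println("ready:", err == nil)
}
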
	I0115 10:00:54.428235   29671 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:00:54.428297   29671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:00:54.443564   29671 command_runner.go:130] > 1103
	I0115 10:00:54.443739   29671 api_server.go:72] duration metric: took 14.327045042s to wait for apiserver process to appear ...
	I0115 10:00:54.443756   29671 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:00:54.443775   29671 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 10:00:54.448750   29671 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0115 10:00:54.448800   29671 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0115 10:00:54.448807   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.448814   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.448822   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.449978   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:00:54.449993   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.450002   29671 round_trippers.go:580]     Content-Length: 264
	I0115 10:00:54.450011   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.450019   29671 round_trippers.go:580]     Audit-Id: be7aef6b-894c-4785-9e28-d911905adbdd
	I0115 10:00:54.450032   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.450042   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.450054   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.450066   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.450085   29671 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0115 10:00:54.450129   29671 api_server.go:141] control plane version: v1.28.4
	I0115 10:00:54.450146   29671 api_server.go:131] duration metric: took 6.383487ms to wait for apiserver health ...
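
Once every pod is Ready, the run confirms the apiserver itself is serving by hitting /healthz and then /version (which reports v1.28.4 above). Both endpoints are readable anonymously on a default cluster, so a stand-alone probe could look like the sketch below; skipping TLS verification is an assumption made only because this is a throwaway local test cluster, not how minikube does it.

// Sketch of an apiserver health probe like the "Checking apiserver healthz" step above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for illustration only; minikube trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.217:8443" + path) // endpoint taken from the log
		if err != nil {
			fmt.Println(path, "failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body) // /healthz should print "200: ok"
	}
}
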
	I0115 10:00:54.450155   29671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:00:54.624555   29671 request.go:629] Waited for 174.333481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:54.624620   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:54.624627   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.624635   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.624644   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.629236   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:00:54.629260   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.629270   29671 round_trippers.go:580]     Audit-Id: 8a5f91fe-78c6-4cc1-a08b-38f25c62d7d5
	I0115 10:00:54.629278   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.629286   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.629295   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.629304   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.629314   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.630959   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0115 10:00:54.633290   29671 system_pods.go:59] 12 kube-system pods found
	I0115 10:00:54.633308   29671 system_pods.go:61] "coredns-5dd5756b68-n2sqg" [f303a63a-c959-477e-89d5-c35bd0802b1b] Running
	I0115 10:00:54.633312   29671 system_pods.go:61] "etcd-multinode-975382" [6b8601c3-a366-4171-9221-4b83d091aff7] Running
	I0115 10:00:54.633319   29671 system_pods.go:61] "kindnet-7tf97" [3b9e470b-af37-44cd-8402-6ec9b3340058] Running
	I0115 10:00:54.633328   29671 system_pods.go:61] "kindnet-pd2q7" [5414de37-d69e-426b-ac29-1827fe0bd753] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:54.633338   29671 system_pods.go:61] "kindnet-q2p7k" [22f0fe0e-fe44-4ba3-b3a8-bbbd7b48a588] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:54.633346   29671 system_pods.go:61] "kube-apiserver-multinode-975382" [0c174d15-48a9-4394-ba76-207b7cbc42a0] Running
	I0115 10:00:54.633354   29671 system_pods.go:61] "kube-controller-manager-multinode-975382" [0fabcc70-f923-40a7-86b4-70c0cc2213ce] Running
	I0115 10:00:54.633364   29671 system_pods.go:61] "kube-proxy-fxwtq" [54b5ed4b-d227-46d0-b113-85849b0c0700] Running
	I0115 10:00:54.633371   29671 system_pods.go:61] "kube-proxy-jgsx4" [a779cea9-5532-4d69-9e49-ac2879e028ec] Running
	I0115 10:00:54.633379   29671 system_pods.go:61] "kube-proxy-znv78" [bb4d831f-7308-4f44-b944-fdfdf1d583c2] Running
	I0115 10:00:54.633384   29671 system_pods.go:61] "kube-scheduler-multinode-975382" [d7c93aee-4d7c-4264-8d65-de8781105178] Running
	I0115 10:00:54.633388   29671 system_pods.go:61] "storage-provisioner" [b8eb636d-31de-4a7e-a296-a66493d5a827] Running
	I0115 10:00:54.633394   29671 system_pods.go:74] duration metric: took 183.230489ms to wait for pod list to return data ...
	I0115 10:00:54.633402   29671 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:00:54.823777   29671 request.go:629] Waited for 190.300828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0115 10:00:54.823844   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0115 10:00:54.823851   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:54.823859   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:54.823867   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:54.826941   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:54.826960   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:54.826967   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:54.826972   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:54.826978   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:54.826987   29671 round_trippers.go:580]     Content-Length: 261
	I0115 10:00:54.826993   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:54 GMT
	I0115 10:00:54.826998   29671 round_trippers.go:580]     Audit-Id: 046f845b-310a-4d8c-84c1-8ef9547b5c7d
	I0115 10:00:54.827003   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:54.827024   29671 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bb2aa1f7-da8f-4785-82a8-74ac34272521","resourceVersion":"360","creationTimestamp":"2024-01-15T09:50:28Z"}}]}
	I0115 10:00:54.827190   29671 default_sa.go:45] found service account: "default"
	I0115 10:00:54.827205   29671 default_sa.go:55] duration metric: took 193.798226ms for default service account to be created ...
	I0115 10:00:54.827213   29671 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:00:55.024651   29671 request.go:629] Waited for 197.376483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:55.024741   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:00:55.024755   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:55.024764   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:55.024775   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:55.031828   29671 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 10:00:55.031847   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:55.031854   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:55.031862   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:55 GMT
	I0115 10:00:55.031867   29671 round_trippers.go:580]     Audit-Id: 82e881ca-f154-4de9-af64-00bee62d5024
	I0115 10:00:55.031873   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:55.031878   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:55.031883   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:55.033410   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0115 10:00:55.035812   29671 system_pods.go:86] 12 kube-system pods found
	I0115 10:00:55.035836   29671 system_pods.go:89] "coredns-5dd5756b68-n2sqg" [f303a63a-c959-477e-89d5-c35bd0802b1b] Running
	I0115 10:00:55.035841   29671 system_pods.go:89] "etcd-multinode-975382" [6b8601c3-a366-4171-9221-4b83d091aff7] Running
	I0115 10:00:55.035845   29671 system_pods.go:89] "kindnet-7tf97" [3b9e470b-af37-44cd-8402-6ec9b3340058] Running
	I0115 10:00:55.035852   29671 system_pods.go:89] "kindnet-pd2q7" [5414de37-d69e-426b-ac29-1827fe0bd753] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:55.035859   29671 system_pods.go:89] "kindnet-q2p7k" [22f0fe0e-fe44-4ba3-b3a8-bbbd7b48a588] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0115 10:00:55.035864   29671 system_pods.go:89] "kube-apiserver-multinode-975382" [0c174d15-48a9-4394-ba76-207b7cbc42a0] Running
	I0115 10:00:55.035869   29671 system_pods.go:89] "kube-controller-manager-multinode-975382" [0fabcc70-f923-40a7-86b4-70c0cc2213ce] Running
	I0115 10:00:55.035873   29671 system_pods.go:89] "kube-proxy-fxwtq" [54b5ed4b-d227-46d0-b113-85849b0c0700] Running
	I0115 10:00:55.035877   29671 system_pods.go:89] "kube-proxy-jgsx4" [a779cea9-5532-4d69-9e49-ac2879e028ec] Running
	I0115 10:00:55.035880   29671 system_pods.go:89] "kube-proxy-znv78" [bb4d831f-7308-4f44-b944-fdfdf1d583c2] Running
	I0115 10:00:55.035884   29671 system_pods.go:89] "kube-scheduler-multinode-975382" [d7c93aee-4d7c-4264-8d65-de8781105178] Running
	I0115 10:00:55.035889   29671 system_pods.go:89] "storage-provisioner" [b8eb636d-31de-4a7e-a296-a66493d5a827] Running
	I0115 10:00:55.035897   29671 system_pods.go:126] duration metric: took 208.680581ms to wait for k8s-apps to be running ...
	I0115 10:00:55.035904   29671 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:00:55.035950   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:00:55.054258   29671 system_svc.go:56] duration metric: took 18.3492ms WaitForService to wait for kubelet.
	I0115 10:00:55.054282   29671 kubeadm.go:581] duration metric: took 14.937591054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:00:55.054304   29671 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:00:55.224702   29671 request.go:629] Waited for 170.333317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0115 10:00:55.224763   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 10:00:55.224768   29671 round_trippers.go:469] Request Headers:
	I0115 10:00:55.224775   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:00:55.224781   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:00:55.228299   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:00:55.228321   29671 round_trippers.go:577] Response Headers:
	I0115 10:00:55.228331   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:00:55.228340   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:00:55 GMT
	I0115 10:00:55.228349   29671 round_trippers.go:580]     Audit-Id: 1477dac1-a341-4861-9c24-0706048e852b
	I0115 10:00:55.228357   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:00:55.228367   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:00:55.228379   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:00:55.228653   29671 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"866","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0115 10:00:55.229260   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:55.229281   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:55.229294   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:55.229301   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:55.229306   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:00:55.229314   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:00:55.229323   29671 node_conditions.go:105] duration metric: took 175.013093ms to run NodePressure ...
	I0115 10:00:55.229338   29671 start.go:228] waiting for startup goroutines ...
	I0115 10:00:55.229351   29671 start.go:233] waiting for cluster config update ...
	I0115 10:00:55.229362   29671 start.go:242] writing updated cluster config ...
	I0115 10:00:55.229792   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:00:55.229888   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 10:00:55.232724   29671 out.go:177] * Starting worker node multinode-975382-m02 in cluster multinode-975382
	I0115 10:00:55.233995   29671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:00:55.234015   29671 cache.go:56] Caching tarball of preloaded images
	I0115 10:00:55.234098   29671 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:00:55.234111   29671 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:00:55.234212   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 10:00:55.234381   29671 start.go:365] acquiring machines lock for multinode-975382-m02: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:00:55.234482   29671 start.go:369] acquired machines lock for "multinode-975382-m02" in 80.996µs
	I0115 10:00:55.234502   29671 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:00:55.234511   29671 fix.go:54] fixHost starting: m02
	I0115 10:00:55.234766   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:00:55.234791   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:00:55.248675   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0115 10:00:55.249071   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:00:55.249603   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:00:55.249634   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:00:55.249973   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:00:55.250162   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:00:55.250324   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetState
	I0115 10:00:55.251890   29671 fix.go:102] recreateIfNeeded on multinode-975382-m02: state=Running err=<nil>
	W0115 10:00:55.251909   29671 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:00:55.253875   29671 out.go:177] * Updating the running kvm2 "multinode-975382-m02" VM ...
	I0115 10:00:55.255261   29671 machine.go:88] provisioning docker machine ...
	I0115 10:00:55.255277   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:00:55.255449   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 10:00:55.255604   29671 buildroot.go:166] provisioning hostname "multinode-975382-m02"
	I0115 10:00:55.255620   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 10:00:55.255748   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:00:55.257750   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.258075   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:55.258102   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.258276   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:00:55.258452   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:55.258580   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:55.258712   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:00:55.258874   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:55.259288   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 10:00:55.259304   29671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382-m02 && echo "multinode-975382-m02" | sudo tee /etc/hostname
	I0115 10:00:55.384258   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-975382-m02
	
	I0115 10:00:55.384296   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:00:55.387039   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.387404   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:55.387429   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.387594   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:00:55.387758   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:55.387945   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:55.388130   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:00:55.388307   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:55.388650   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 10:00:55.388669   29671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-975382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-975382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-975382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:00:55.495290   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:00:55.495316   29671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:00:55.495334   29671 buildroot.go:174] setting up certificates
	I0115 10:00:55.495345   29671 provision.go:83] configureAuth start
	I0115 10:00:55.495366   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetMachineName
	I0115 10:00:55.495645   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 10:00:55.498311   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.498662   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:55.498692   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.498826   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:00:55.501082   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.501443   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:55.501477   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.501604   29671 provision.go:138] copyHostCerts
	I0115 10:00:55.501634   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:00:55.501664   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:00:55.501672   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:00:55.501763   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:00:55.501851   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:00:55.501882   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:00:55.501892   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:00:55.501931   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:00:55.501989   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:00:55.502011   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:00:55.502018   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:00:55.502050   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:00:55.502126   29671 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.multinode-975382-m02 san=[192.168.39.95 192.168.39.95 localhost 127.0.0.1 minikube multinode-975382-m02]
	I0115 10:00:55.974747   29671 provision.go:172] copyRemoteCerts
	I0115 10:00:55.974800   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:00:55.974822   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:00:55.977542   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.977897   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:55.977928   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:55.978063   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:00:55.978256   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:55.978450   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:00:55.978556   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 10:00:56.063454   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 10:00:56.063526   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0115 10:00:56.087895   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 10:00:56.087967   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:00:56.111055   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 10:00:56.111116   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:00:56.133669   29671 provision.go:86] duration metric: configureAuth took 638.31254ms
	I0115 10:00:56.133695   29671 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:00:56.133920   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:00:56.133999   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:00:56.136629   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:56.137046   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:00:56.137098   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:00:56.137231   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:00:56.137415   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:56.137580   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:00:56.137745   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:00:56.137895   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:00:56.138215   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 10:00:56.138236   29671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:02:26.755688   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:02:26.755719   29671 machine.go:91] provisioned docker machine in 1m31.500444236s
	I0115 10:02:26.755732   29671 start.go:300] post-start starting for "multinode-975382-m02" (driver="kvm2")
	I0115 10:02:26.755743   29671 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:02:26.755759   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:02:26.756086   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:02:26.756115   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:02:26.759371   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:26.759846   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:26.759884   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:26.760019   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:02:26.760255   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:02:26.760476   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:02:26.760677   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 10:02:26.846381   29671 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:02:26.850341   29671 command_runner.go:130] > NAME=Buildroot
	I0115 10:02:26.850361   29671 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0115 10:02:26.850368   29671 command_runner.go:130] > ID=buildroot
	I0115 10:02:26.850375   29671 command_runner.go:130] > VERSION_ID=2021.02.12
	I0115 10:02:26.850381   29671 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0115 10:02:26.850615   29671 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:02:26.850640   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:02:26.850762   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:02:26.850872   29671 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:02:26.850886   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 10:02:26.850995   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:02:26.860992   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:02:26.885696   29671 start.go:303] post-start completed in 129.95237ms
	I0115 10:02:26.885716   29671 fix.go:56] fixHost completed within 1m31.651205396s
	I0115 10:02:26.885739   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:02:26.888461   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:26.888841   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:26.888869   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:26.889000   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:02:26.889218   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:02:26.889387   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:02:26.889525   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:02:26.889678   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:02:26.889978   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0115 10:02:26.889988   29671 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:02:26.994930   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705312946.982559126
	
	I0115 10:02:26.994960   29671 fix.go:206] guest clock: 1705312946.982559126
	I0115 10:02:26.994970   29671 fix.go:219] Guest: 2024-01-15 10:02:26.982559126 +0000 UTC Remote: 2024-01-15 10:02:26.885719911 +0000 UTC m=+453.464205549 (delta=96.839215ms)
	I0115 10:02:26.994989   29671 fix.go:190] guest clock delta is within tolerance: 96.839215ms
	I0115 10:02:26.994995   29671 start.go:83] releasing machines lock for "multinode-975382-m02", held for 1m31.76050132s
	I0115 10:02:26.995026   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:02:26.995326   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 10:02:26.997782   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:26.998151   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:26.998177   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:27.005077   29671 out.go:177] * Found network options:
	I0115 10:02:27.006758   29671 out.go:177]   - NO_PROXY=192.168.39.217
	W0115 10:02:27.008528   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 10:02:27.008568   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:02:27.009044   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:02:27.009233   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 10:02:27.009318   29671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:02:27.009367   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	W0115 10:02:27.009400   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 10:02:27.009480   29671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:02:27.009506   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 10:02:27.012043   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:27.012105   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:27.012423   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:27.012474   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:27.012523   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:27.012546   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:27.012584   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:02:27.012696   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 10:02:27.012781   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:02:27.012868   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 10:02:27.012935   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:02:27.013018   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 10:02:27.013130   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 10:02:27.013129   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 10:02:27.133416   29671 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 10:02:27.257523   29671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 10:02:27.263615   29671 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0115 10:02:27.263932   29671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:02:27.263992   29671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:02:27.273040   29671 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0115 10:02:27.273055   29671 start.go:475] detecting cgroup driver to use...
	I0115 10:02:27.273110   29671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:02:27.286679   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:02:27.298954   29671 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:02:27.299000   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:02:27.311569   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:02:27.324233   29671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:02:27.444556   29671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:02:27.585928   29671 docker.go:233] disabling docker service ...
	I0115 10:02:27.585998   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:02:27.600251   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:02:27.612155   29671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:02:27.731600   29671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:02:27.846518   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:02:27.858874   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:02:27.876983   29671 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 10:02:27.877038   29671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:02:27.877080   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:02:27.886230   29671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:02:27.886292   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:02:27.895152   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:02:27.903768   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:02:27.912655   29671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:02:27.921978   29671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:02:27.929744   29671 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 10:02:27.930025   29671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:02:27.937830   29671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:02:28.051159   29671 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:02:28.268204   29671 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:02:28.268266   29671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:02:28.273625   29671 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 10:02:28.273645   29671 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 10:02:28.273652   29671 command_runner.go:130] > Device: 16h/22d	Inode: 1265        Links: 1
	I0115 10:02:28.273661   29671 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:02:28.273669   29671 command_runner.go:130] > Access: 2024-01-15 10:02:28.190346012 +0000
	I0115 10:02:28.273682   29671 command_runner.go:130] > Modify: 2024-01-15 10:02:28.190346012 +0000
	I0115 10:02:28.273693   29671 command_runner.go:130] > Change: 2024-01-15 10:02:28.190346012 +0000
	I0115 10:02:28.273700   29671 command_runner.go:130] >  Birth: -
	I0115 10:02:28.273843   29671 start.go:543] Will wait 60s for crictl version
	I0115 10:02:28.273894   29671 ssh_runner.go:195] Run: which crictl
	I0115 10:02:28.277645   29671 command_runner.go:130] > /usr/bin/crictl
	I0115 10:02:28.277708   29671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:02:28.320901   29671 command_runner.go:130] > Version:  0.1.0
	I0115 10:02:28.320929   29671 command_runner.go:130] > RuntimeName:  cri-o
	I0115 10:02:28.320937   29671 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0115 10:02:28.320948   29671 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 10:02:28.321043   29671 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:02:28.321113   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:02:28.372898   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:02:28.372925   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:02:28.372936   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:02:28.372943   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:02:28.372972   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:02:28.372986   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:02:28.372993   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:02:28.373004   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:02:28.373018   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:02:28.373030   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:02:28.373035   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:02:28.373040   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:02:28.373106   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:02:28.421208   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:02:28.421234   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:02:28.421245   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:02:28.421252   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:02:28.421261   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:02:28.421268   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:02:28.421275   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:02:28.421283   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:02:28.421291   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:02:28.421302   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:02:28.421307   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:02:28.421311   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:02:28.424306   29671 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:02:28.425652   29671 out.go:177]   - env NO_PROXY=192.168.39.217
	I0115 10:02:28.427100   29671 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 10:02:28.429974   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:28.430376   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 10:02:28.430432   29671 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 10:02:28.430650   29671 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:02:28.436320   29671 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0115 10:02:28.436454   29671 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382 for IP: 192.168.39.95
	I0115 10:02:28.436480   29671 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:02:28.436617   29671 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:02:28.436662   29671 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:02:28.436673   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 10:02:28.436690   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 10:02:28.436707   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 10:02:28.436724   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 10:02:28.436786   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:02:28.436833   29671 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:02:28.436848   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:02:28.436882   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:02:28.436917   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:02:28.436951   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:02:28.437003   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:02:28.437045   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 10:02:28.437064   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:02:28.437082   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 10:02:28.437422   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:02:28.463323   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:02:28.485953   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:02:28.507563   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:02:28.528104   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:02:28.549202   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:02:28.572190   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:02:28.594306   29671 ssh_runner.go:195] Run: openssl version
	I0115 10:02:28.599593   29671 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0115 10:02:28.599907   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:02:28.608997   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:02:28.613230   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:02:28.613601   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:02:28.613645   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:02:28.618829   29671 command_runner.go:130] > b5213941
	I0115 10:02:28.618889   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:02:28.626947   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:02:28.636226   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:02:28.640263   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:02:28.640412   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:02:28.640458   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:02:28.645318   29671 command_runner.go:130] > 51391683
	I0115 10:02:28.645510   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:02:28.654058   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:02:28.663464   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:02:28.667467   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:02:28.667622   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:02:28.667669   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:02:28.672739   29671 command_runner.go:130] > 3ec20f2e
	I0115 10:02:28.672797   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:02:28.681284   29671 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:02:28.684726   29671 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 10:02:28.684979   29671 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 10:02:28.685071   29671 ssh_runner.go:195] Run: crio config
	I0115 10:02:28.737224   29671 command_runner.go:130] ! time="2024-01-15 10:02:28.725061202Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0115 10:02:28.737251   29671 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 10:02:28.749043   29671 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 10:02:28.749070   29671 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 10:02:28.749080   29671 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 10:02:28.749086   29671 command_runner.go:130] > #
	I0115 10:02:28.749101   29671 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 10:02:28.749110   29671 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 10:02:28.749123   29671 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 10:02:28.749139   29671 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 10:02:28.749149   29671 command_runner.go:130] > # reload'.
	I0115 10:02:28.749159   29671 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 10:02:28.749172   29671 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 10:02:28.749185   29671 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 10:02:28.749197   29671 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 10:02:28.749206   29671 command_runner.go:130] > [crio]
	I0115 10:02:28.749219   29671 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 10:02:28.749230   29671 command_runner.go:130] > # containers images, in this directory.
	I0115 10:02:28.749238   29671 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0115 10:02:28.749254   29671 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 10:02:28.749265   29671 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0115 10:02:28.749278   29671 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 10:02:28.749290   29671 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 10:02:28.749301   29671 command_runner.go:130] > storage_driver = "overlay"
	I0115 10:02:28.749313   29671 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 10:02:28.749325   29671 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 10:02:28.749332   29671 command_runner.go:130] > storage_option = [
	I0115 10:02:28.749343   29671 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0115 10:02:28.749348   29671 command_runner.go:130] > ]
	I0115 10:02:28.749361   29671 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 10:02:28.749373   29671 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 10:02:28.749383   29671 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 10:02:28.749395   29671 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 10:02:28.749409   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 10:02:28.749419   29671 command_runner.go:130] > # always happen on a node reboot
	I0115 10:02:28.749428   29671 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 10:02:28.749439   29671 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 10:02:28.749449   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 10:02:28.749459   29671 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 10:02:28.749467   29671 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 10:02:28.749474   29671 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 10:02:28.749484   29671 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 10:02:28.749491   29671 command_runner.go:130] > # internal_wipe = true
	I0115 10:02:28.749496   29671 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 10:02:28.749504   29671 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 10:02:28.749510   29671 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 10:02:28.749515   29671 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 10:02:28.749521   29671 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 10:02:28.749526   29671 command_runner.go:130] > [crio.api]
	I0115 10:02:28.749532   29671 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 10:02:28.749538   29671 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 10:02:28.749543   29671 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 10:02:28.749551   29671 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 10:02:28.749557   29671 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 10:02:28.749564   29671 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 10:02:28.749569   29671 command_runner.go:130] > # stream_port = "0"
	I0115 10:02:28.749575   29671 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 10:02:28.749582   29671 command_runner.go:130] > # stream_enable_tls = false
	I0115 10:02:28.749588   29671 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 10:02:28.749592   29671 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 10:02:28.749600   29671 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 10:02:28.749606   29671 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 10:02:28.749612   29671 command_runner.go:130] > # minutes.
	I0115 10:02:28.749616   29671 command_runner.go:130] > # stream_tls_cert = ""
	I0115 10:02:28.749624   29671 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 10:02:28.749630   29671 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 10:02:28.749637   29671 command_runner.go:130] > # stream_tls_key = ""
	I0115 10:02:28.749647   29671 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 10:02:28.749660   29671 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 10:02:28.749672   29671 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 10:02:28.749678   29671 command_runner.go:130] > # stream_tls_ca = ""
	I0115 10:02:28.749693   29671 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:02:28.749704   29671 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0115 10:02:28.749719   29671 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:02:28.749729   29671 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0115 10:02:28.749760   29671 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 10:02:28.749771   29671 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 10:02:28.749775   29671 command_runner.go:130] > [crio.runtime]
	I0115 10:02:28.749782   29671 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 10:02:28.749788   29671 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 10:02:28.749792   29671 command_runner.go:130] > # "nofile=1024:2048"
	I0115 10:02:28.749798   29671 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 10:02:28.749805   29671 command_runner.go:130] > # default_ulimits = [
	I0115 10:02:28.749809   29671 command_runner.go:130] > # ]
	I0115 10:02:28.749818   29671 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 10:02:28.749822   29671 command_runner.go:130] > # no_pivot = false
	I0115 10:02:28.749830   29671 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 10:02:28.749836   29671 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 10:02:28.749843   29671 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 10:02:28.749849   29671 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 10:02:28.749855   29671 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 10:02:28.749861   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:02:28.749869   29671 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0115 10:02:28.749873   29671 command_runner.go:130] > # Cgroup setting for conmon
	I0115 10:02:28.749879   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 10:02:28.749887   29671 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 10:02:28.749893   29671 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 10:02:28.749901   29671 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 10:02:28.749907   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:02:28.749913   29671 command_runner.go:130] > conmon_env = [
	I0115 10:02:28.749919   29671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0115 10:02:28.749929   29671 command_runner.go:130] > ]
	I0115 10:02:28.749937   29671 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 10:02:28.749942   29671 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 10:02:28.749950   29671 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 10:02:28.749954   29671 command_runner.go:130] > # default_env = [
	I0115 10:02:28.749957   29671 command_runner.go:130] > # ]
	I0115 10:02:28.749964   29671 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 10:02:28.749970   29671 command_runner.go:130] > # selinux = false
	I0115 10:02:28.749977   29671 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 10:02:28.749985   29671 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 10:02:28.749991   29671 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 10:02:28.749997   29671 command_runner.go:130] > # seccomp_profile = ""
	I0115 10:02:28.750003   29671 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 10:02:28.750011   29671 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 10:02:28.750017   29671 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 10:02:28.750025   29671 command_runner.go:130] > # which might increase security.
	I0115 10:02:28.750029   29671 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0115 10:02:28.750039   29671 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 10:02:28.750045   29671 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 10:02:28.750052   29671 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 10:02:28.750058   29671 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 10:02:28.750066   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:02:28.750070   29671 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 10:02:28.750076   29671 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 10:02:28.750081   29671 command_runner.go:130] > # the cgroup blockio controller.
	I0115 10:02:28.750086   29671 command_runner.go:130] > # blockio_config_file = ""
	I0115 10:02:28.750094   29671 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 10:02:28.750099   29671 command_runner.go:130] > # irqbalance daemon.
	I0115 10:02:28.750105   29671 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 10:02:28.750112   29671 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 10:02:28.750119   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:02:28.750123   29671 command_runner.go:130] > # rdt_config_file = ""
	I0115 10:02:28.750131   29671 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 10:02:28.750135   29671 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 10:02:28.750143   29671 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 10:02:28.750148   29671 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 10:02:28.750156   29671 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 10:02:28.750162   29671 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 10:02:28.750168   29671 command_runner.go:130] > # will be added.
	I0115 10:02:28.750172   29671 command_runner.go:130] > # default_capabilities = [
	I0115 10:02:28.750178   29671 command_runner.go:130] > # 	"CHOWN",
	I0115 10:02:28.750182   29671 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 10:02:28.750186   29671 command_runner.go:130] > # 	"FSETID",
	I0115 10:02:28.750190   29671 command_runner.go:130] > # 	"FOWNER",
	I0115 10:02:28.750196   29671 command_runner.go:130] > # 	"SETGID",
	I0115 10:02:28.750200   29671 command_runner.go:130] > # 	"SETUID",
	I0115 10:02:28.750204   29671 command_runner.go:130] > # 	"SETPCAP",
	I0115 10:02:28.750209   29671 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 10:02:28.750212   29671 command_runner.go:130] > # 	"KILL",
	I0115 10:02:28.750218   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750224   29671 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 10:02:28.750232   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:02:28.750236   29671 command_runner.go:130] > # default_sysctls = [
	I0115 10:02:28.750241   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750245   29671 command_runner.go:130] > # List of devices on the host that a
	I0115 10:02:28.750254   29671 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 10:02:28.750258   29671 command_runner.go:130] > # allowed_devices = [
	I0115 10:02:28.750262   29671 command_runner.go:130] > # 	"/dev/fuse",
	I0115 10:02:28.750266   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750270   29671 command_runner.go:130] > # List of additional devices, specified as
	I0115 10:02:28.750278   29671 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 10:02:28.750287   29671 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 10:02:28.750302   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:02:28.750308   29671 command_runner.go:130] > # additional_devices = [
	I0115 10:02:28.750311   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750316   29671 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 10:02:28.750321   29671 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 10:02:28.750325   29671 command_runner.go:130] > # 	"/etc/cdi",
	I0115 10:02:28.750330   29671 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 10:02:28.750334   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750341   29671 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 10:02:28.750349   29671 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 10:02:28.750353   29671 command_runner.go:130] > # Defaults to false.
	I0115 10:02:28.750358   29671 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 10:02:28.750365   29671 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 10:02:28.750373   29671 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 10:02:28.750380   29671 command_runner.go:130] > # hooks_dir = [
	I0115 10:02:28.750385   29671 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 10:02:28.750391   29671 command_runner.go:130] > # ]
	I0115 10:02:28.750397   29671 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 10:02:28.750406   29671 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 10:02:28.750411   29671 command_runner.go:130] > # its default mounts from the following two files:
	I0115 10:02:28.750428   29671 command_runner.go:130] > #
	I0115 10:02:28.750438   29671 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 10:02:28.750452   29671 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 10:02:28.750458   29671 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 10:02:28.750464   29671 command_runner.go:130] > #
	I0115 10:02:28.750470   29671 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 10:02:28.750479   29671 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 10:02:28.750485   29671 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 10:02:28.750493   29671 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 10:02:28.750497   29671 command_runner.go:130] > #
	I0115 10:02:28.750503   29671 command_runner.go:130] > # default_mounts_file = ""
	I0115 10:02:28.750508   29671 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 10:02:28.750517   29671 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 10:02:28.750521   29671 command_runner.go:130] > pids_limit = 1024
	I0115 10:02:28.750527   29671 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 10:02:28.750535   29671 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 10:02:28.750542   29671 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 10:02:28.750552   29671 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 10:02:28.750556   29671 command_runner.go:130] > # log_size_max = -1
	I0115 10:02:28.750563   29671 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 10:02:28.750569   29671 command_runner.go:130] > # log_to_journald = false
	I0115 10:02:28.750575   29671 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 10:02:28.750581   29671 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 10:02:28.750586   29671 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 10:02:28.750593   29671 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 10:02:28.750598   29671 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 10:02:28.750604   29671 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 10:02:28.750610   29671 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 10:02:28.750616   29671 command_runner.go:130] > # read_only = false
	I0115 10:02:28.750622   29671 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 10:02:28.750630   29671 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 10:02:28.750634   29671 command_runner.go:130] > # live configuration reload.
	I0115 10:02:28.750647   29671 command_runner.go:130] > # log_level = "info"
	I0115 10:02:28.750660   29671 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 10:02:28.750671   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:02:28.750681   29671 command_runner.go:130] > # log_filter = ""
	I0115 10:02:28.750690   29671 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 10:02:28.750702   29671 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 10:02:28.750711   29671 command_runner.go:130] > # separated by comma.
	I0115 10:02:28.750718   29671 command_runner.go:130] > # uid_mappings = ""
	I0115 10:02:28.750731   29671 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 10:02:28.750744   29671 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 10:02:28.750751   29671 command_runner.go:130] > # separated by comma.
	I0115 10:02:28.750755   29671 command_runner.go:130] > # gid_mappings = ""
	I0115 10:02:28.750763   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 10:02:28.750769   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:02:28.750776   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:02:28.750782   29671 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 10:02:28.750788   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 10:02:28.750796   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:02:28.750803   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:02:28.750809   29671 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 10:02:28.750815   29671 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 10:02:28.750823   29671 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 10:02:28.750829   29671 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 10:02:28.750835   29671 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 10:02:28.750840   29671 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 10:02:28.750848   29671 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 10:02:28.750853   29671 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 10:02:28.750858   29671 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 10:02:28.750865   29671 command_runner.go:130] > drop_infra_ctr = false
	I0115 10:02:28.750871   29671 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 10:02:28.750880   29671 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 10:02:28.750893   29671 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 10:02:28.750900   29671 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 10:02:28.750906   29671 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 10:02:28.750913   29671 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 10:02:28.750918   29671 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 10:02:28.750930   29671 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 10:02:28.750936   29671 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0115 10:02:28.750943   29671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 10:02:28.750949   29671 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 10:02:28.750959   29671 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 10:02:28.750964   29671 command_runner.go:130] > # default_runtime = "runc"
	I0115 10:02:28.750972   29671 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 10:02:28.750979   29671 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 10:02:28.750991   29671 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 10:02:28.750998   29671 command_runner.go:130] > # creation as a file is not desired either.
	I0115 10:02:28.751006   29671 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 10:02:28.751014   29671 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 10:02:28.751018   29671 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 10:02:28.751023   29671 command_runner.go:130] > # ]
	I0115 10:02:28.751029   29671 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 10:02:28.751035   29671 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 10:02:28.751044   29671 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 10:02:28.751050   29671 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 10:02:28.751056   29671 command_runner.go:130] > #
	I0115 10:02:28.751061   29671 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 10:02:28.751068   29671 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 10:02:28.751072   29671 command_runner.go:130] > #  runtime_type = "oci"
	I0115 10:02:28.751077   29671 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 10:02:28.751082   29671 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 10:02:28.751088   29671 command_runner.go:130] > #  allowed_annotations = []
	I0115 10:02:28.751092   29671 command_runner.go:130] > # Where:
	I0115 10:02:28.751097   29671 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 10:02:28.751105   29671 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 10:02:28.751112   29671 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 10:02:28.751120   29671 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 10:02:28.751124   29671 command_runner.go:130] > #   in $PATH.
	I0115 10:02:28.751132   29671 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 10:02:28.751137   29671 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 10:02:28.751145   29671 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 10:02:28.751149   29671 command_runner.go:130] > #   state.
	I0115 10:02:28.751158   29671 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 10:02:28.751164   29671 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0115 10:02:28.751173   29671 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 10:02:28.751178   29671 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 10:02:28.751187   29671 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 10:02:28.751193   29671 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 10:02:28.751198   29671 command_runner.go:130] > #   The currently recognized values are:
	I0115 10:02:28.751206   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 10:02:28.751213   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 10:02:28.751222   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 10:02:28.751228   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 10:02:28.751237   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 10:02:28.751243   29671 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 10:02:28.751251   29671 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 10:02:28.751260   29671 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 10:02:28.751265   29671 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 10:02:28.751272   29671 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 10:02:28.751276   29671 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0115 10:02:28.751280   29671 command_runner.go:130] > runtime_type = "oci"
	I0115 10:02:28.751285   29671 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 10:02:28.751291   29671 command_runner.go:130] > runtime_config_path = ""
	I0115 10:02:28.751295   29671 command_runner.go:130] > monitor_path = ""
	I0115 10:02:28.751299   29671 command_runner.go:130] > monitor_cgroup = ""
	I0115 10:02:28.751303   29671 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 10:02:28.751312   29671 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 10:02:28.751316   29671 command_runner.go:130] > # running containers
	I0115 10:02:28.751321   29671 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 10:02:28.751327   29671 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 10:02:28.751354   29671 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 10:02:28.751362   29671 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 10:02:28.751367   29671 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 10:02:28.751372   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 10:02:28.751379   29671 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 10:02:28.751383   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 10:02:28.751390   29671 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 10:02:28.751395   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 10:02:28.751402   29671 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 10:02:28.751410   29671 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 10:02:28.751419   29671 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 10:02:28.751427   29671 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 10:02:28.751438   29671 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 10:02:28.751445   29671 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 10:02:28.751454   29671 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 10:02:28.751462   29671 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 10:02:28.751470   29671 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 10:02:28.751477   29671 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 10:02:28.751483   29671 command_runner.go:130] > # Example:
	I0115 10:02:28.751488   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 10:02:28.751493   29671 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 10:02:28.751498   29671 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 10:02:28.751506   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 10:02:28.751510   29671 command_runner.go:130] > # cpuset = 0
	I0115 10:02:28.751515   29671 command_runner.go:130] > # cpushares = "0-1"
	I0115 10:02:28.751519   29671 command_runner.go:130] > # Where:
	I0115 10:02:28.751524   29671 command_runner.go:130] > # The workload name is workload-type.
	I0115 10:02:28.751531   29671 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 10:02:28.751539   29671 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 10:02:28.751545   29671 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 10:02:28.751552   29671 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 10:02:28.751560   29671 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 10:02:28.751564   29671 command_runner.go:130] > # 
	I0115 10:02:28.751573   29671 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 10:02:28.751576   29671 command_runner.go:130] > #
	I0115 10:02:28.751584   29671 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 10:02:28.751591   29671 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 10:02:28.751599   29671 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 10:02:28.751605   29671 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 10:02:28.751613   29671 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 10:02:28.751617   29671 command_runner.go:130] > [crio.image]
	I0115 10:02:28.751626   29671 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 10:02:28.751630   29671 command_runner.go:130] > # default_transport = "docker://"
	I0115 10:02:28.751637   29671 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 10:02:28.751649   29671 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:02:28.751660   29671 command_runner.go:130] > # global_auth_file = ""
	I0115 10:02:28.751670   29671 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 10:02:28.751680   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:02:28.751689   29671 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 10:02:28.751703   29671 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 10:02:28.751716   29671 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:02:28.751725   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:02:28.751735   29671 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 10:02:28.751744   29671 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 10:02:28.751756   29671 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 10:02:28.751767   29671 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 10:02:28.751775   29671 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 10:02:28.751780   29671 command_runner.go:130] > # pause_command = "/pause"
	I0115 10:02:28.751789   29671 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 10:02:28.751795   29671 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 10:02:28.751804   29671 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 10:02:28.751811   29671 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 10:02:28.751817   29671 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 10:02:28.751822   29671 command_runner.go:130] > # signature_policy = ""
	I0115 10:02:28.751827   29671 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 10:02:28.751836   29671 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 10:02:28.751840   29671 command_runner.go:130] > # changing them here.
	I0115 10:02:28.751847   29671 command_runner.go:130] > # insecure_registries = [
	I0115 10:02:28.751850   29671 command_runner.go:130] > # ]
	I0115 10:02:28.751860   29671 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 10:02:28.751865   29671 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 10:02:28.751872   29671 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 10:02:28.751877   29671 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 10:02:28.751884   29671 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 10:02:28.751890   29671 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 10:02:28.751895   29671 command_runner.go:130] > # CNI plugins.
	I0115 10:02:28.751899   29671 command_runner.go:130] > [crio.network]
	I0115 10:02:28.751907   29671 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 10:02:28.751913   29671 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0115 10:02:28.751919   29671 command_runner.go:130] > # cni_default_network = ""
	I0115 10:02:28.751929   29671 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 10:02:28.751937   29671 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 10:02:28.751943   29671 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 10:02:28.751949   29671 command_runner.go:130] > # plugin_dirs = [
	I0115 10:02:28.751953   29671 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 10:02:28.751958   29671 command_runner.go:130] > # ]
	I0115 10:02:28.751964   29671 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 10:02:28.751970   29671 command_runner.go:130] > [crio.metrics]
	I0115 10:02:28.751974   29671 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 10:02:28.751981   29671 command_runner.go:130] > enable_metrics = true
	I0115 10:02:28.751986   29671 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 10:02:28.751993   29671 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 10:02:28.751999   29671 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0115 10:02:28.752007   29671 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 10:02:28.752013   29671 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 10:02:28.752018   29671 command_runner.go:130] > # metrics_collectors = [
	I0115 10:02:28.752021   29671 command_runner.go:130] > # 	"operations",
	I0115 10:02:28.752030   29671 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 10:02:28.752035   29671 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 10:02:28.752039   29671 command_runner.go:130] > # 	"operations_errors",
	I0115 10:02:28.752043   29671 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 10:02:28.752050   29671 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 10:02:28.752055   29671 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 10:02:28.752061   29671 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 10:02:28.752066   29671 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 10:02:28.752073   29671 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 10:02:28.752077   29671 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 10:02:28.752082   29671 command_runner.go:130] > # 	"containers_oom_total",
	I0115 10:02:28.752088   29671 command_runner.go:130] > # 	"containers_oom",
	I0115 10:02:28.752092   29671 command_runner.go:130] > # 	"processes_defunct",
	I0115 10:02:28.752096   29671 command_runner.go:130] > # 	"operations_total",
	I0115 10:02:28.752103   29671 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 10:02:28.752108   29671 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 10:02:28.752114   29671 command_runner.go:130] > # 	"operations_errors_total",
	I0115 10:02:28.752119   29671 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 10:02:28.752123   29671 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 10:02:28.752130   29671 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 10:02:28.752135   29671 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 10:02:28.752142   29671 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 10:02:28.752146   29671 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 10:02:28.752151   29671 command_runner.go:130] > # ]
	I0115 10:02:28.752156   29671 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 10:02:28.752162   29671 command_runner.go:130] > # metrics_port = 9090
	I0115 10:02:28.752167   29671 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 10:02:28.752173   29671 command_runner.go:130] > # metrics_socket = ""
	I0115 10:02:28.752178   29671 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 10:02:28.752186   29671 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 10:02:28.752192   29671 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 10:02:28.752200   29671 command_runner.go:130] > # certificate on any modification event.
	I0115 10:02:28.752204   29671 command_runner.go:130] > # metrics_cert = ""
	I0115 10:02:28.752209   29671 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 10:02:28.752215   29671 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 10:02:28.752219   29671 command_runner.go:130] > # metrics_key = ""
	I0115 10:02:28.752225   29671 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 10:02:28.752231   29671 command_runner.go:130] > [crio.tracing]
	I0115 10:02:28.752236   29671 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 10:02:28.752243   29671 command_runner.go:130] > # enable_tracing = false
	I0115 10:02:28.752248   29671 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 10:02:28.752255   29671 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 10:02:28.752260   29671 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 10:02:28.752267   29671 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 10:02:28.752273   29671 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 10:02:28.752279   29671 command_runner.go:130] > [crio.stats]
	I0115 10:02:28.752284   29671 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 10:02:28.752289   29671 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 10:02:28.752296   29671 command_runner.go:130] > # stats_collection_period = 0
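The dump above is the CRI-O configuration minikube generated for this node; the uncommented lines (conmon, conmon_cgroup = "pod", cgroup_manager = "cgroupfs", pids_limit, pause_image, the runc runtime table, and so on) are the values that actually take effect. Purely as an illustrative check, assuming the file lives at /etc/crio/crio.conf and that the third-party github.com/BurntSushi/toml package is acceptable (neither assumption comes from this log), the two settings the rest of this run depends on most could be read back like this:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumption: third-party TOML parser, not part of minikube's tooling
)

// crioConf models only the two tables of interest from the dump above.
type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"` // expected "cgroupfs" per the dump
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"` // expected "registry.k8s.io/pause:3.9" per the dump
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	// Assumed path; the log does not state which file the dump was read from.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup_manager:", c.Crio.Runtime.CgroupManager)
	fmt.Println("pause_image:", c.Crio.Image.PauseImage)
}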
	I0115 10:02:28.752354   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:02:28.752363   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:02:28.752371   29671 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:02:28.752388   29671 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-975382 NodeName:multinode-975382-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:02:28.752487   29671 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-975382-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"

	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:02:28.752539   29671 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:02:28.752585   29671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:02:28.761762   29671 command_runner.go:130] > kubeadm
	I0115 10:02:28.761785   29671 command_runner.go:130] > kubectl
	I0115 10:02:28.761790   29671 command_runner.go:130] > kubelet
	I0115 10:02:28.762004   29671 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:02:28.762094   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0115 10:02:28.770521   29671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0115 10:02:28.787342   29671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:02:28.803475   29671 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0115 10:02:28.807029   29671 command_runner.go:130] > 192.168.39.217	control-plane.minikube.internal
	I0115 10:02:28.807102   29671 host.go:66] Checking if "multinode-975382" exists ...
	I0115 10:02:28.807378   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:02:28.807436   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:02:28.807462   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:02:28.821599   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0115 10:02:28.821982   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:02:28.822401   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:02:28.822446   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:02:28.822741   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:02:28.822933   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:02:28.823087   29671 start.go:304] JoinCluster: &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:02:28.823187   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 10:02:28.823201   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:02:28.825661   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:02:28.826050   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:02:28.826080   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:02:28.826220   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:02:28.826404   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:02:28.826565   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:02:28.826720   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:02:29.007345   29671 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 6pu17g.vg7vecxxn5jsm0bs --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
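The line above is the full kubeadm join invocation generated for this cluster; in the steps that follow, minikube first drains and deletes the stale m02 node object and then uses this command to rejoin the worker over SSH. Purely as an illustrative sketch (not minikube's actual code path, which goes through its ssh_runner), executing the same command from Go would amount to roughly:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Token and discovery hash are the values printed in the log line above.
	cmd := exec.Command("sudo", "kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "6pu17g.vg7vecxxn5jsm0bs",
		"--discovery-token-ca-cert-hash", "sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm join failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}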
	I0115 10:02:29.014320   29671 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 10:02:29.014357   29671 host.go:66] Checking if "multinode-975382" exists ...
	I0115 10:02:29.014664   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:02:29.014689   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:02:29.029274   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0115 10:02:29.029698   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:02:29.030089   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:02:29.030131   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:02:29.030412   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:02:29.030594   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:02:29.030787   29671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-975382-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0115 10:02:29.030811   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:02:29.033259   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:02:29.033600   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:02:29.033624   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:02:29.033750   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:02:29.033902   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:02:29.034005   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:02:29.034111   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:02:29.221874   29671 command_runner.go:130] > node/multinode-975382-m02 cordoned
	I0115 10:02:32.288136   29671 command_runner.go:130] > pod "busybox-5bc68d56bd-pwx96" has DeletionTimestamp older than 1 seconds, skipping
	I0115 10:02:32.288158   29671 command_runner.go:130] > node/multinode-975382-m02 drained
	I0115 10:02:32.289858   29671 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0115 10:02:32.289885   29671 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-pd2q7, kube-system/kube-proxy-znv78
	I0115 10:02:32.289913   29671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-975382-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.259099905s)
	I0115 10:02:32.289933   29671 node.go:108] successfully drained node "m02"
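The drain that just completed is the first half of minikube's remove-then-rejoin handling of the stale m02 record: cordon and drain the old node, delete its Node object, then run kubeadm join again. A minimal Go sketch of the drain step, shelling out to kubectl much like the ssh_runner line above does on the control-plane VM (kubeconfig path and node name are copied from the log; the deprecated --delete-local-data flag is dropped because --delete-emptydir-data supersedes it):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the drain invocation logged above, minus the deprecated flag.
        cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "drain", "multinode-975382-m02",
            "--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
            "--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("drain failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }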
	I0115 10:02:32.290300   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:02:32.290569   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:02:32.290961   29671 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0115 10:02:32.291025   29671 round_trippers.go:463] DELETE https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:02:32.291038   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:32.291049   29671 round_trippers.go:473]     Content-Type: application/json
	I0115 10:02:32.291058   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:32.291068   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:32.304069   29671 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0115 10:02:32.304090   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:32.304097   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:32.304103   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:32.304108   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:32.304115   29671 round_trippers.go:580]     Content-Length: 171
	I0115 10:02:32.304123   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:32 GMT
	I0115 10:02:32.304131   29671 round_trippers.go:580]     Audit-Id: 21103266-9947-4dd7-8ed1-822ff1bb68f6
	I0115 10:02:32.304147   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:32.304173   29671 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-975382-m02","kind":"nodes","uid":"f52a36d4-5266-4815-8cf8-78296db0efd7"}}
	I0115 10:02:32.304198   29671 node.go:124] successfully deleted node "m02"
	I0115 10:02:32.304209   29671 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
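With the drain done, the stale Node object is removed through the API; the DELETE https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02 request above is that call. The same operation in client-go, as a minimal sketch (the kubeconfig path is an assumption for illustration; the node name comes from the log):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of the DELETE request in the round_trippers lines above.
        if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-975382-m02", metav1.DeleteOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println("node object deleted")
    }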
	I0115 10:02:32.304237   29671 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 10:02:32.304256   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6pu17g.vg7vecxxn5jsm0bs --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-975382-m02"
	I0115 10:02:32.362522   29671 command_runner.go:130] ! W0115 10:02:32.350192    2655 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0115 10:02:32.362615   29671 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0115 10:02:32.513109   29671 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0115 10:02:32.513140   29671 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0115 10:02:33.275396   29671 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 10:02:33.275427   29671 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0115 10:02:33.275442   29671 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0115 10:02:33.275455   29671 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:02:33.275465   29671 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:02:33.275473   29671 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 10:02:33.275484   29671 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0115 10:02:33.275496   29671 command_runner.go:130] > This node has joined the cluster:
	I0115 10:02:33.275502   29671 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0115 10:02:33.275511   29671 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0115 10:02:33.275518   29671 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0115 10:02:33.276051   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 10:02:33.519318   29671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-975382 minikube.k8s.io/updated_at=2024_01_15T10_02_33_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:02:33.634287   29671 command_runner.go:130] > node/multinode-975382-m02 labeled
	I0115 10:02:33.634314   29671 command_runner.go:130] > node/multinode-975382-m03 labeled
	I0115 10:02:33.634339   29671 start.go:306] JoinCluster complete in 4.811252792s
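Two details in the join sequence above are easy to miss: kubeadm join runs with --ignore-preflight-errors=all, which is why the preflight warnings about port 10250 and the pre-existing kubelet.conf/ca.crt do not abort the join on a machine that was already in the cluster; and the follow-up kubectl label uses the selector "-l minikube.k8s.io/primary!=true", which is why both m02 and m03 report as labeled even though only m02 rejoined. For a single node the same labels could be set with a strategic-merge patch; a hedged client-go sketch (reuses the clientset and imports from the previous sketch plus k8s.io/apimachinery/pkg/types; label values copied from the log):

    // labelNode applies minikube's node labels to one node via a
    // strategic-merge patch; cs is the *kubernetes.Clientset built earlier.
    func labelNode(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        patch := []byte(`{"metadata":{"labels":{` +
            `"minikube.k8s.io/name":"multinode-975382",` +
            `"minikube.k8s.io/primary":"false",` +
            `"minikube.k8s.io/version":"v1.32.0"}}}`)
        _, err := cs.CoreV1().Nodes().Patch(ctx, name, types.StrategicMergePatchType,
            patch, metav1.PatchOptions{})
        return err
    }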
	I0115 10:02:33.634352   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:02:33.634359   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:02:33.634428   29671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 10:02:33.640502   29671 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 10:02:33.640536   29671 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0115 10:02:33.640549   29671 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0115 10:02:33.640559   29671 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:02:33.640570   29671 command_runner.go:130] > Access: 2024-01-15 10:00:04.443236172 +0000
	I0115 10:02:33.640582   29671 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0115 10:02:33.640645   29671 command_runner.go:130] > Change: 2024-01-15 10:00:02.526236172 +0000
	I0115 10:02:33.640653   29671 command_runner.go:130] >  Birth: -
	I0115 10:02:33.640730   29671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 10:02:33.640751   29671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 10:02:33.659886   29671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 10:02:34.024765   29671 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:02:34.030276   29671 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:02:34.033125   29671 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 10:02:34.045999   29671 command_runner.go:130] > daemonset.apps/kindnet configured
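The CNI step above runs as: with three nodes found, kindnet is recommended, the portmap plugin is verified with stat, the manifest is copied to /var/tmp/minikube/cni.yaml, and kubectl apply is run against it; the "unchanged"/"configured" lines confirm the apply is idempotent. A simplified local sketch of the check-then-apply part (in the real run both steps happen over SSH on the control-plane VM; paths are the ones from the log):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // The portmap plugin must be present for the kindnet DaemonSet to work.
        if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
            log.Fatalf("portmap CNI plugin missing: %v", err)
        }
        out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "apply", "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }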
	I0115 10:02:34.048936   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:02:34.049235   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:02:34.049612   29671 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 10:02:34.049628   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.049639   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.049649   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.052362   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:34.052378   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.052384   29671 round_trippers.go:580]     Audit-Id: e29de84e-cd6e-4394-9958-8bbbd9dc48ac
	I0115 10:02:34.052390   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.052406   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.052417   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.052440   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.052457   29671 round_trippers.go:580]     Content-Length: 291
	I0115 10:02:34.052464   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.052540   29671 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"901","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 10:02:34.052639   29671 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-975382" context rescaled to 1 replicas
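The GET against .../deployments/coredns/scale above reads the Scale subresource, which minikube uses to pin CoreDNS at a single replica on multi-node profiles (here spec.replicas is already 1, so nothing changes). A minimal client-go sketch of the read-and-update pattern on that subresource, assuming a clientset built as in the earlier sketch:

    // rescaleCoreDNS sets kube-system/coredns to n replicas via the Scale
    // subresource, mirroring the request logged above.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, n int32) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == n {
            return nil // already at the desired count
        }
        scale.Spec.Replicas = n
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }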
	I0115 10:02:34.052670   29671 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 10:02:34.054703   29671 out.go:177] * Verifying Kubernetes components...
	I0115 10:02:34.056052   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:02:34.070411   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:02:34.070662   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:02:34.070943   29671 node_ready.go:35] waiting up to 6m0s for node "multinode-975382-m02" to be "Ready" ...
	I0115 10:02:34.071024   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:02:34.071034   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.071041   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.071047   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.073193   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:34.073211   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.073222   29671 round_trippers.go:580]     Audit-Id: ad8a9a9f-ce43-4fa2-91ee-51f8c9b02b27
	I0115 10:02:34.073231   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.073239   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.073249   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.073261   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.073267   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.073440   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"27561a41-ede8-4b35-93b8-8e7a61b08b6c","resourceVersion":"1051","creationTimestamp":"2024-01-15T10:02:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_02_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0115 10:02:34.073691   29671 node_ready.go:49] node "multinode-975382-m02" has status "Ready":"True"
	I0115 10:02:34.073708   29671 node_ready.go:38] duration metric: took 2.748167ms waiting for node "multinode-975382-m02" to be "Ready" ...
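The node_ready wait is a poll of the Node object until its Ready condition reports True; here the very first GET already finds Ready, so the wait finishes in under 3ms. A minimal sketch of the same check with client-go (assumes the earlier clientset plus imports for fmt, time and corev1 "k8s.io/api/core/v1"):

    // waitNodeReady polls the named node until its Ready condition is True or
    // the timeout elapses, roughly what node_ready.go does above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("node %q not Ready within %s", name, timeout)
            }
            time.Sleep(3 * time.Second)
        }
    }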
	I0115 10:02:34.073715   29671 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:02:34.073764   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:02:34.073779   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.073796   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.073805   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.077351   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:34.077366   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.077373   29671 round_trippers.go:580]     Audit-Id: af064fd4-f309-4c55-b72b-fb827b70b12a
	I0115 10:02:34.077378   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.077383   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.077388   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.077393   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.077400   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.078754   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1058"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I0115 10:02:34.081713   29671 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.081776   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:02:34.081784   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.081791   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.081796   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.083783   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.083803   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.083811   29671 round_trippers.go:580]     Audit-Id: 2114664e-0f35-4ad4-86d5-1fff1f8c178f
	I0115 10:02:34.083819   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.083826   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.083834   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.083842   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.083850   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.084058   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0115 10:02:34.084486   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.084499   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.084506   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.084511   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.086252   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.086270   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.086279   29671 round_trippers.go:580]     Audit-Id: 8530cec4-5153-43c9-aff9-fd0e8eb71593
	I0115 10:02:34.086287   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.086296   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.086304   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.086313   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.086329   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.086493   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:34.086866   29671 pod_ready.go:92] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.086888   29671 pod_ready.go:81] duration metric: took 5.157418ms waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
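The remaining pod_ready waits all repeat the pattern just shown for CoreDNS: fetch the pod, check its Ready condition, then fetch the node it runs on. A condensed sketch covering the readiness part for the system-critical label selectors listed above (the helper name is illustrative; it assumes the same clientset and imports as the earlier sketches):

    // systemPodsReady reports whether every kube-system pod matching one of the
    // given label selectors has a Ready condition of True.
    func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset, selectors []string) (bool, error) {
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                        break
                    }
                }
                if !ready {
                    return false, nil
                }
            }
        }
        return true, nil
    }

    // e.g. systemPodsReady(ctx, cs, []string{"k8s-app=kube-dns", "component=etcd",
    //     "component=kube-apiserver", "component=kube-controller-manager",
    //     "k8s-app=kube-proxy", "component=kube-scheduler"})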
	I0115 10:02:34.086896   29671 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.086937   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 10:02:34.086945   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.086951   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.086957   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.089019   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:34.089036   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.089045   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.089055   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.089064   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.089075   29671 round_trippers.go:580]     Audit-Id: cae1d696-d263-49f4-a392-4dc795057d42
	I0115 10:02:34.089092   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.089101   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.089233   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"865","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0115 10:02:34.089544   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.089558   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.089565   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.089572   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.091477   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.091498   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.091510   29671 round_trippers.go:580]     Audit-Id: 7f0e8a9a-2a0f-491f-b3d2-e8fe69ffab93
	I0115 10:02:34.091519   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.091527   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.091535   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.091543   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.091555   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.091711   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:34.091999   29671 pod_ready.go:92] pod "etcd-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.092013   29671 pod_ready.go:81] duration metric: took 5.111234ms waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.092033   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.092083   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 10:02:34.092093   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.092103   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.092114   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.093714   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.093730   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.093739   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.093747   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.093756   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.093773   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.093782   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.093789   29671 round_trippers.go:580]     Audit-Id: c5cd3cf5-e6a3-4aa5-8eb7-e2fcf65fbdfc
	I0115 10:02:34.094043   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"873","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0115 10:02:34.094351   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.094364   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.094374   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.094382   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.095992   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.096008   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.096017   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.096025   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.096034   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.096046   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.096057   29671 round_trippers.go:580]     Audit-Id: bb6bea41-b155-4b57-8a80-bbf3118f00f1
	I0115 10:02:34.096065   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.096184   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:34.096435   29671 pod_ready.go:92] pod "kube-apiserver-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.096451   29671 pod_ready.go:81] duration metric: took 4.405403ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.096461   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.096511   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 10:02:34.096520   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.096530   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.096541   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.098571   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:34.098587   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.098595   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.098604   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.098612   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.098623   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.098631   29671 round_trippers.go:580]     Audit-Id: a3266427-5cd5-4ef9-9589-941ffe63d8be
	I0115 10:02:34.098639   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.098936   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"887","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0115 10:02:34.099234   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.099248   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.099258   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.099266   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.100791   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:02:34.100806   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.100814   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.100822   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.100830   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.100842   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.100850   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.100866   29671 round_trippers.go:580]     Audit-Id: 362817b3-2f16-4a48-9691-50b42eb79754
	I0115 10:02:34.101058   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:34.101400   29671 pod_ready.go:92] pod "kube-controller-manager-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.101417   29671 pod_ready.go:81] duration metric: took 4.947113ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.101427   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.271648   29671 request.go:629] Waited for 170.169472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:02:34.271713   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:02:34.271720   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.271728   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.271736   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.274775   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:34.274797   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.274807   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.274816   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.274825   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.274833   29671 round_trippers.go:580]     Audit-Id: 286bb2bb-08c0-4fec-9e48-af4630fb9075
	I0115 10:02:34.274842   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.274848   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.275210   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"713","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0115 10:02:34.471986   29671 request.go:629] Waited for 196.350286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:02:34.472054   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:02:34.472059   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.472066   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.472072   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.474764   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:34.474783   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.474792   29671 round_trippers.go:580]     Audit-Id: 534f27c8-7878-40d4-bcee-f16da2e35c4b
	I0115 10:02:34.474801   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.474809   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.474817   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.474825   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.474833   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.475177   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"e8425595-976c-4f6f-8ad3-6cb2de7275fd","resourceVersion":"1052","creationTimestamp":"2024-01-15T09:52:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_02_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:52:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0115 10:02:34.475450   29671 pod_ready.go:92] pod "kube-proxy-fxwtq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.475466   29671 pod_ready.go:81] duration metric: took 374.029198ms waiting for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
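The "Waited for ... due to client-side throttling, not priority and fairness" lines around these checks come from client-go's client-side rate limiter: the rest.Config dumps earlier show QPS:0, Burst:0, which fall back to the defaults of 5 requests per second with a burst of 10, so rapid back-to-back node and pod GETs get spaced out by a couple hundred milliseconds. If that latency mattered, the limits could be raised before building the clientset; a minimal sketch with illustrative values (reusing the imports from the node-deletion sketch):

    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        log.Fatal(err)
    }
    // Zero values fall back to client-go's defaults (5 QPS, burst 10), which is
    // what produces the throttling waits seen in this log.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)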
	I0115 10:02:34.475481   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.671610   29671 request.go:629] Waited for 196.046745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:02:34.671664   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:02:34.671670   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.671688   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.671698   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.675012   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:34.675032   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.675042   29671 round_trippers.go:580]     Audit-Id: c4e76307-4158-43c2-aeec-dd451b3ca5a9
	I0115 10:02:34.675050   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.675057   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.675065   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.675073   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.675084   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.675221   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"827","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 10:02:34.872006   29671 request.go:629] Waited for 196.362365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.872077   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:34.872082   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:34.872090   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:34.872096   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:34.875840   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:34.875859   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:34.875865   29671 round_trippers.go:580]     Audit-Id: ec032765-e808-4934-b2ac-3c3436f72695
	I0115 10:02:34.875871   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:34.875876   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:34.875881   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:34.875886   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:34.875891   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:34 GMT
	I0115 10:02:34.876673   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:34.877007   29671 pod_ready.go:92] pod "kube-proxy-jgsx4" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:34.877022   29671 pod_ready.go:81] duration metric: took 401.535407ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:34.877031   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:35.072045   29671 request.go:629] Waited for 194.943737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:02:35.072109   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:02:35.072113   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:35.072120   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:35.072126   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:35.074760   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:35.074784   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:35.074800   29671 round_trippers.go:580]     Audit-Id: 48e2c9cc-8d27-462b-b217-f261dd53f86e
	I0115 10:02:35.074808   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:35.074814   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:35.074819   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:35.074825   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:35.074831   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:35 GMT
	I0115 10:02:35.075019   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-znv78","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb4d831f-7308-4f44-b944-fdfdf1d583c2","resourceVersion":"1070","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0115 10:02:35.271688   29671 request.go:629] Waited for 196.238862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:02:35.271781   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:02:35.271793   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:35.271804   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:35.271816   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:35.274730   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:02:35.274753   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:35.274762   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:35.274771   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:35 GMT
	I0115 10:02:35.274783   29671 round_trippers.go:580]     Audit-Id: d506a25f-6b67-4690-89ea-e1069a46da8a
	I0115 10:02:35.274796   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:35.274807   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:35.274816   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:35.274978   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"27561a41-ede8-4b35-93b8-8e7a61b08b6c","resourceVersion":"1051","creationTimestamp":"2024-01-15T10:02:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_02_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0115 10:02:35.275233   29671 pod_ready.go:92] pod "kube-proxy-znv78" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:35.275247   29671 pod_ready.go:81] duration metric: took 398.211018ms waiting for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:35.275256   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:35.471407   29671 request.go:629] Waited for 196.061528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:02:35.471481   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:02:35.471488   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:35.471502   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:35.471512   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:35.475694   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:02:35.475713   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:35.475720   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:35.475726   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:35 GMT
	I0115 10:02:35.475731   29671 round_trippers.go:580]     Audit-Id: a41a2649-d5b3-4aa0-bab5-d4395a22eed3
	I0115 10:02:35.475737   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:35.475748   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:35.475756   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:35.476147   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"889","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0115 10:02:35.671876   29671 request.go:629] Waited for 195.392455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:35.671965   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:02:35.671971   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:35.671979   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:35.671985   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:35.675341   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:35.675365   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:35.675377   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:35.675386   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:35.675394   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:35.675403   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:35.675413   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:35 GMT
	I0115 10:02:35.675428   29671 round_trippers.go:580]     Audit-Id: 8049c80a-7fa6-409c-a5c6-3307b5979a3c
	I0115 10:02:35.675656   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:02:35.676141   29671 pod_ready.go:92] pod "kube-scheduler-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:02:35.676168   29671 pod_ready.go:81] duration metric: took 400.90151ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:02:35.676181   29671 pod_ready.go:38] duration metric: took 1.602453974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
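	(The readiness loop above, pod_ready.go, polls the API server directly. A rough standalone equivalent with kubectl against the same profile would be the following; the context name is taken from the profile in this log and the timeout mirrors the 6m wait above.)
	  kubectl --context multinode-975382 -n kube-system wait --for=condition=Ready \
	    pod -l k8s-app=kube-proxy --timeout=6m
	  kubectl --context multinode-975382 -n kube-system wait --for=condition=Ready \
	    pod/kube-scheduler-multinode-975382 --timeout=6m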
	I0115 10:02:35.676202   29671 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:02:35.676248   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:02:35.689171   29671 system_svc.go:56] duration metric: took 12.962834ms WaitForService to wait for kubelet.
	I0115 10:02:35.689196   29671 kubeadm.go:581] duration metric: took 1.636499103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:02:35.689211   29671 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:02:35.871587   29671 request.go:629] Waited for 182.312838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0115 10:02:35.871663   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 10:02:35.871668   29671 round_trippers.go:469] Request Headers:
	I0115 10:02:35.871676   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:02:35.871682   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:02:35.875342   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:02:35.875365   29671 round_trippers.go:577] Response Headers:
	I0115 10:02:35.875386   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:02:35 GMT
	I0115 10:02:35.875398   29671 round_trippers.go:580]     Audit-Id: 67653843-37d3-4bcb-acd5-a6ccd4f883f1
	I0115 10:02:35.875408   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:02:35.875418   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:02:35.875429   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:02:35.875440   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:02:35.875931   29671 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1073"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16439 chars]
	I0115 10:02:35.876510   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:02:35.876531   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:02:35.876539   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:02:35.876543   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:02:35.876547   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:02:35.876550   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:02:35.876555   29671 node_conditions.go:105] duration metric: took 187.341209ms to run NodePressure ...
	I0115 10:02:35.876569   29671 start.go:228] waiting for startup goroutines ...
	I0115 10:02:35.876591   29671 start.go:242] writing updated cluster config ...
	I0115 10:02:35.877031   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:02:35.877148   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 10:02:35.880734   29671 out.go:177] * Starting worker node multinode-975382-m03 in cluster multinode-975382
	I0115 10:02:35.882173   29671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:02:35.882199   29671 cache.go:56] Caching tarball of preloaded images
	I0115 10:02:35.882266   29671 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:02:35.882277   29671 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:02:35.882376   29671 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/config.json ...
	I0115 10:02:35.882568   29671 start.go:365] acquiring machines lock for multinode-975382-m03: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:02:35.882612   29671 start.go:369] acquired machines lock for "multinode-975382-m03" in 25.774µs
	I0115 10:02:35.882628   29671 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:02:35.882635   29671 fix.go:54] fixHost starting: m03
	I0115 10:02:35.882871   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:02:35.882890   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:02:35.897385   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0115 10:02:35.897731   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:02:35.898200   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:02:35.898224   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:02:35.898589   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:02:35.898771   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:02:35.898908   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetState
	I0115 10:02:35.900310   29671 fix.go:102] recreateIfNeeded on multinode-975382-m03: state=Running err=<nil>
	W0115 10:02:35.900324   29671 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:02:35.902444   29671 out.go:177] * Updating the running kvm2 "multinode-975382-m03" VM ...
	I0115 10:02:35.903939   29671 machine.go:88] provisioning docker machine ...
	I0115 10:02:35.903961   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:02:35.904157   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetMachineName
	I0115 10:02:35.904320   29671 buildroot.go:166] provisioning hostname "multinode-975382-m03"
	I0115 10:02:35.904339   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetMachineName
	I0115 10:02:35.904485   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:02:35.906737   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:35.907166   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:35.907192   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:35.907294   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:02:35.907442   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:35.907562   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:35.907714   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:02:35.907872   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:02:35.908291   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0115 10:02:35.908312   29671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-975382-m03 && echo "multinode-975382-m03" | sudo tee /etc/hostname
	I0115 10:02:36.040076   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-975382-m03
	
	I0115 10:02:36.040099   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:02:36.042926   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.043275   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:36.043298   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.043477   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:02:36.043656   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:36.043817   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:36.044007   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:02:36.044215   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:02:36.044692   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0115 10:02:36.044721   29671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-975382-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-975382-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-975382-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:02:36.159024   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
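	(The hostname provisioning above is two idempotent SSH commands. A standalone reproduction, copied from the Run lines in the log, with the /etc/hosts branch simplified to the append case:)
	  sudo hostname multinode-975382-m03 && echo "multinode-975382-m03" | sudo tee /etc/hostname
	  grep -q 'multinode-975382-m03' /etc/hosts || \
	    echo '127.0.1.1 multinode-975382-m03' | sudo tee -a /etc/hosts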
	I0115 10:02:36.159058   29671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:02:36.159078   29671 buildroot.go:174] setting up certificates
	I0115 10:02:36.159085   29671 provision.go:83] configureAuth start
	I0115 10:02:36.159094   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetMachineName
	I0115 10:02:36.159348   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetIP
	I0115 10:02:36.161757   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.162081   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:36.162107   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.162242   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:02:36.164507   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.164861   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:36.164888   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.165017   29671 provision.go:138] copyHostCerts
	I0115 10:02:36.165039   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:02:36.165065   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:02:36.165074   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:02:36.165141   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:02:36.165208   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:02:36.165227   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:02:36.165233   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:02:36.165256   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:02:36.165295   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:02:36.165310   29671 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:02:36.165316   29671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:02:36.165336   29671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:02:36.165377   29671 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.multinode-975382-m03 san=[192.168.39.198 192.168.39.198 localhost 127.0.0.1 minikube multinode-975382-m03]
	I0115 10:02:36.339564   29671 provision.go:172] copyRemoteCerts
	I0115 10:02:36.339619   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:02:36.339641   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:02:36.342583   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.343002   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:36.343035   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.343239   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:02:36.343413   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:36.343594   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:02:36.343756   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m03/id_rsa Username:docker}
	I0115 10:02:36.436065   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 10:02:36.436126   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:02:36.460331   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 10:02:36.460398   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0115 10:02:36.485777   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 10:02:36.485833   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:02:36.510894   29671 provision.go:86] duration metric: configureAuth took 351.798856ms
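	(configureAuth above regenerates the docker-machine style TLS material and copies it to the node; per the scp lines, the end state on multinode-975382-m03 is:)
	  /etc/docker/ca.pem          (1078 bytes)
	  /etc/docker/server.pem      (1237 bytes, SANs include 192.168.39.198 and multinode-975382-m03)
	  /etc/docker/server-key.pem  (1679 bytes)
	(A hypothetical spot-check from the host, not part of the test run:)
	  minikube -p multinode-975382 ssh -n m03 -- ls -l /etc/docker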
	I0115 10:02:36.510920   29671 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:02:36.511181   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:02:36.511262   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:02:36.514036   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.514409   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:02:36.514473   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:02:36.514613   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:02:36.514787   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:36.514974   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:02:36.515092   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:02:36.515277   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:02:36.515661   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0115 10:02:36.515683   29671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:04:07.036641   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:04:07.036671   29671 machine.go:91] provisioned docker machine in 1m31.132717426s
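	(The %!s(MISSING) above is a logging artifact of the command template; judging by the payload echoed back over SSH, the command that was actually run amounts to:)
	  sudo mkdir -p /etc/sysconfig
	  printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio
	(The jump from 10:02:36 to 10:04:07 shows this crio restart accounts for nearly all of the 1m31s provisioning time reported above.)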
	I0115 10:04:07.036685   29671 start.go:300] post-start starting for "multinode-975382-m03" (driver="kvm2")
	I0115 10:04:07.036700   29671 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:04:07.036722   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:04:07.037090   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:04:07.037120   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:04:07.039802   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.040119   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:07.040134   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.040306   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:04:07.040492   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:04:07.040684   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:04:07.040814   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m03/id_rsa Username:docker}
	I0115 10:04:07.129198   29671 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:04:07.133536   29671 command_runner.go:130] > NAME=Buildroot
	I0115 10:04:07.133564   29671 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0115 10:04:07.133572   29671 command_runner.go:130] > ID=buildroot
	I0115 10:04:07.133580   29671 command_runner.go:130] > VERSION_ID=2021.02.12
	I0115 10:04:07.133589   29671 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0115 10:04:07.133617   29671 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:04:07.133634   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:04:07.133719   29671 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:04:07.133811   29671 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:04:07.133823   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /etc/ssl/certs/134822.pem
	I0115 10:04:07.133929   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:04:07.143055   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:04:07.167196   29671 start.go:303] post-start completed in 130.499001ms
	I0115 10:04:07.167218   29671 fix.go:56] fixHost completed within 1m31.284581585s
	I0115 10:04:07.167244   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:04:07.170192   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.170635   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:07.170666   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.170835   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:04:07.171044   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:04:07.171192   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:04:07.171371   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:04:07.171544   29671 main.go:141] libmachine: Using SSH client type: native
	I0115 10:04:07.171859   29671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0115 10:04:07.171871   29671 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:04:07.289075   29671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705313047.281146483
	
	I0115 10:04:07.289100   29671 fix.go:206] guest clock: 1705313047.281146483
	I0115 10:04:07.289109   29671 fix.go:219] Guest: 2024-01-15 10:04:07.281146483 +0000 UTC Remote: 2024-01-15 10:04:07.16722383 +0000 UTC m=+553.745709468 (delta=113.922653ms)
	I0115 10:04:07.289129   29671 fix.go:190] guest clock delta is within tolerance: 113.922653ms
	I0115 10:04:07.289136   29671 start.go:83] releasing machines lock for "multinode-975382-m03", held for 1m31.406514936s
	I0115 10:04:07.289173   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:04:07.289442   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetIP
	I0115 10:04:07.292319   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.292669   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:07.292701   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.294662   29671 out.go:177] * Found network options:
	I0115 10:04:07.296830   29671 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.95
	W0115 10:04:07.298350   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 10:04:07.298370   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 10:04:07.298382   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:04:07.298923   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:04:07.299091   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .DriverName
	I0115 10:04:07.299201   29671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:04:07.299240   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	W0115 10:04:07.299237   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 10:04:07.299303   29671 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 10:04:07.299361   29671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:04:07.299381   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHHostname
	I0115 10:04:07.302067   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.302280   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.302471   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:07.302518   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.302674   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:04:07.302696   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:07.302736   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:07.302839   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHPort
	I0115 10:04:07.302844   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:04:07.303021   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:04:07.303027   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHKeyPath
	I0115 10:04:07.303156   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m03/id_rsa Username:docker}
	I0115 10:04:07.303203   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetSSHUsername
	I0115 10:04:07.303330   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m03/id_rsa Username:docker}
	I0115 10:04:07.445138   29671 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 10:04:07.591464   29671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 10:04:07.597296   29671 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0115 10:04:07.597354   29671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:04:07.597438   29671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:04:07.606466   29671 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0115 10:04:07.606485   29671 start.go:475] detecting cgroup driver to use...
	I0115 10:04:07.606541   29671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:04:07.621249   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:04:07.637140   29671 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:04:07.637200   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:04:07.658230   29671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:04:07.677870   29671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:04:07.830699   29671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:04:07.956459   29671 docker.go:233] disabling docker service ...
	I0115 10:04:07.956534   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:04:07.978492   29671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:04:07.990865   29671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:04:08.114527   29671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:04:08.249825   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:04:08.263149   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:04:08.282087   29671 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 10:04:08.282443   29671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:04:08.282506   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:04:08.293872   29671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:04:08.293933   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:04:08.303820   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:04:08.313523   29671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:04:08.323202   29671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:04:08.333170   29671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:04:08.341878   29671 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 10:04:08.341945   29671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:04:08.350702   29671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:04:08.493066   29671 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:04:10.420827   29671 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.927722285s)
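	(The pause-image and cgroup part of the CRI-O reconfiguration above reduces to a handful of in-place edits of /etc/crio/crio.conf.d/02-crio.conf plus a restart; collected from the Run lines, the CNI cleanup and ip_forward tweaks are shown separately above:)
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio   # ~1.9s here, vs ~91s during provisioning above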
	I0115 10:04:10.420859   29671 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:04:10.420913   29671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:04:10.425875   29671 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 10:04:10.425897   29671 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 10:04:10.425906   29671 command_runner.go:130] > Device: 16h/22d	Inode: 1249        Links: 1
	I0115 10:04:10.425915   29671 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:04:10.425924   29671 command_runner.go:130] > Access: 2024-01-15 10:04:10.386764049 +0000
	I0115 10:04:10.425937   29671 command_runner.go:130] > Modify: 2024-01-15 10:04:10.322758201 +0000
	I0115 10:04:10.425946   29671 command_runner.go:130] > Change: 2024-01-15 10:04:10.322758201 +0000
	I0115 10:04:10.425957   29671 command_runner.go:130] >  Birth: -
	I0115 10:04:10.425995   29671 start.go:543] Will wait 60s for crictl version
	I0115 10:04:10.426039   29671 ssh_runner.go:195] Run: which crictl
	I0115 10:04:10.429966   29671 command_runner.go:130] > /usr/bin/crictl
	I0115 10:04:10.430027   29671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:04:10.466262   29671 command_runner.go:130] > Version:  0.1.0
	I0115 10:04:10.466282   29671 command_runner.go:130] > RuntimeName:  cri-o
	I0115 10:04:10.466289   29671 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0115 10:04:10.466297   29671 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 10:04:10.467468   29671 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:04:10.467537   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:04:10.514985   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:04:10.515020   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:04:10.515066   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:04:10.515098   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:04:10.515112   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:04:10.515119   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:04:10.515123   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:04:10.515131   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:04:10.515137   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:04:10.515146   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:04:10.515153   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:04:10.515158   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:04:10.515256   29671 ssh_runner.go:195] Run: crio --version
	I0115 10:04:10.560884   29671 command_runner.go:130] > crio version 1.24.1
	I0115 10:04:10.560903   29671 command_runner.go:130] > Version:          1.24.1
	I0115 10:04:10.560910   29671 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0115 10:04:10.560914   29671 command_runner.go:130] > GitTreeState:     dirty
	I0115 10:04:10.560920   29671 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0115 10:04:10.560926   29671 command_runner.go:130] > GoVersion:        go1.19.9
	I0115 10:04:10.560930   29671 command_runner.go:130] > Compiler:         gc
	I0115 10:04:10.560935   29671 command_runner.go:130] > Platform:         linux/amd64
	I0115 10:04:10.560940   29671 command_runner.go:130] > Linkmode:         dynamic
	I0115 10:04:10.560948   29671 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 10:04:10.560953   29671 command_runner.go:130] > SeccompEnabled:   true
	I0115 10:04:10.560959   29671 command_runner.go:130] > AppArmorEnabled:  false
	I0115 10:04:10.563290   29671 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:04:10.564819   29671 out.go:177]   - env NO_PROXY=192.168.39.217
	I0115 10:04:10.566255   29671 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.95
	I0115 10:04:10.567511   29671 main.go:141] libmachine: (multinode-975382-m03) Calling .GetIP
	I0115 10:04:10.570252   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:10.570671   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:04:18", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:52:33 +0000 UTC Type:0 Mac:52:54:00:55:04:18 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-975382-m03 Clientid:01:52:54:00:55:04:18}
	I0115 10:04:10.570696   29671 main.go:141] libmachine: (multinode-975382-m03) DBG | domain multinode-975382-m03 has defined IP address 192.168.39.198 and MAC address 52:54:00:55:04:18 in network mk-multinode-975382
	I0115 10:04:10.570876   29671 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:04:10.574747   29671 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0115 10:04:10.574967   29671 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382 for IP: 192.168.39.198
	I0115 10:04:10.575003   29671 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:04:10.575137   29671 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:04:10.575189   29671 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:04:10.575207   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 10:04:10.575228   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 10:04:10.575246   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 10:04:10.575264   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 10:04:10.575329   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:04:10.575376   29671 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:04:10.575391   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:04:10.575428   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:04:10.575463   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:04:10.575496   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:04:10.575553   29671 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:04:10.575590   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem -> /usr/share/ca-certificates/13482.pem
	I0115 10:04:10.575610   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> /usr/share/ca-certificates/134822.pem
	I0115 10:04:10.575628   29671 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:04:10.575950   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:04:10.602723   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:04:10.627251   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:04:10.652569   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:04:10.675388   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:04:10.697535   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:04:10.719180   29671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:04:10.740964   29671 ssh_runner.go:195] Run: openssl version
	I0115 10:04:10.746094   29671 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0115 10:04:10.746389   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:04:10.755279   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:04:10.759364   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:04:10.759531   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:04:10.759569   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:04:10.764523   29671 command_runner.go:130] > 51391683
	I0115 10:04:10.764933   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:04:10.773876   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:04:10.784469   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:04:10.788619   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:04:10.788906   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:04:10.788947   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:04:10.794094   29671 command_runner.go:130] > 3ec20f2e
	I0115 10:04:10.794149   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:04:10.801931   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:04:10.811267   29671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:04:10.815854   29671 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:04:10.816345   29671 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:04:10.816384   29671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:04:10.822319   29671 command_runner.go:130] > b5213941
	I0115 10:04:10.822385   29671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
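	(All three trust-store installs above follow the same pattern: place the PEM, compute its OpenSSL subject hash, and link <hash>.0 to it. For the minikube CA, with hash b5213941 taken from the openssl output above:)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0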
	I0115 10:04:10.831008   29671 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:04:10.835330   29671 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 10:04:10.835509   29671 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 10:04:10.835595   29671 ssh_runner.go:195] Run: crio config
	I0115 10:04:10.883732   29671 command_runner.go:130] ! time="2024-01-15 10:04:10.876016371Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0115 10:04:10.883767   29671 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 10:04:10.889506   29671 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 10:04:10.889533   29671 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 10:04:10.889544   29671 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 10:04:10.889549   29671 command_runner.go:130] > #
	I0115 10:04:10.889560   29671 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 10:04:10.889571   29671 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 10:04:10.889580   29671 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 10:04:10.889593   29671 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 10:04:10.889599   29671 command_runner.go:130] > # reload'.
	I0115 10:04:10.889609   29671 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 10:04:10.889620   29671 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 10:04:10.889634   29671 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 10:04:10.889647   29671 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 10:04:10.889656   29671 command_runner.go:130] > [crio]
	I0115 10:04:10.889668   29671 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 10:04:10.889678   29671 command_runner.go:130] > # container images, in this directory.
	I0115 10:04:10.889686   29671 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0115 10:04:10.889698   29671 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 10:04:10.889709   29671 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0115 10:04:10.889719   29671 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 10:04:10.889733   29671 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 10:04:10.889740   29671 command_runner.go:130] > storage_driver = "overlay"
	I0115 10:04:10.889753   29671 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 10:04:10.889765   29671 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 10:04:10.889775   29671 command_runner.go:130] > storage_option = [
	I0115 10:04:10.889783   29671 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0115 10:04:10.889794   29671 command_runner.go:130] > ]
	I0115 10:04:10.889807   29671 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 10:04:10.889819   29671 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 10:04:10.889830   29671 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 10:04:10.889842   29671 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 10:04:10.889855   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 10:04:10.889865   29671 command_runner.go:130] > # always happen on a node reboot
	I0115 10:04:10.889876   29671 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 10:04:10.889888   29671 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 10:04:10.889900   29671 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 10:04:10.889915   29671 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 10:04:10.889926   29671 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 10:04:10.889940   29671 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 10:04:10.889965   29671 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 10:04:10.889971   29671 command_runner.go:130] > # internal_wipe = true
	I0115 10:04:10.889977   29671 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 10:04:10.889982   29671 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 10:04:10.889988   29671 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 10:04:10.889993   29671 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 10:04:10.889999   29671 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 10:04:10.890002   29671 command_runner.go:130] > [crio.api]
	I0115 10:04:10.890008   29671 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 10:04:10.890013   29671 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 10:04:10.890018   29671 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 10:04:10.890025   29671 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 10:04:10.890032   29671 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 10:04:10.890039   29671 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 10:04:10.890043   29671 command_runner.go:130] > # stream_port = "0"
	I0115 10:04:10.890051   29671 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 10:04:10.890055   29671 command_runner.go:130] > # stream_enable_tls = false
	I0115 10:04:10.890061   29671 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 10:04:10.890067   29671 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 10:04:10.890074   29671 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 10:04:10.890082   29671 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 10:04:10.890088   29671 command_runner.go:130] > # minutes.
	I0115 10:04:10.890093   29671 command_runner.go:130] > # stream_tls_cert = ""
	I0115 10:04:10.890102   29671 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 10:04:10.890109   29671 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 10:04:10.890115   29671 command_runner.go:130] > # stream_tls_key = ""
	I0115 10:04:10.890121   29671 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 10:04:10.890127   29671 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 10:04:10.890132   29671 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 10:04:10.890139   29671 command_runner.go:130] > # stream_tls_ca = ""
	I0115 10:04:10.890146   29671 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:04:10.890153   29671 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0115 10:04:10.890160   29671 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 10:04:10.890166   29671 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0115 10:04:10.890182   29671 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 10:04:10.890190   29671 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 10:04:10.890194   29671 command_runner.go:130] > [crio.runtime]
	I0115 10:04:10.890200   29671 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 10:04:10.890208   29671 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 10:04:10.890215   29671 command_runner.go:130] > # "nofile=1024:2048"
	I0115 10:04:10.890222   29671 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 10:04:10.890228   29671 command_runner.go:130] > # default_ulimits = [
	I0115 10:04:10.890232   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890238   29671 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 10:04:10.890244   29671 command_runner.go:130] > # no_pivot = false
	I0115 10:04:10.890250   29671 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 10:04:10.890258   29671 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 10:04:10.890265   29671 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 10:04:10.890271   29671 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 10:04:10.890278   29671 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 10:04:10.890284   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:04:10.890291   29671 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0115 10:04:10.890295   29671 command_runner.go:130] > # Cgroup setting for conmon
	I0115 10:04:10.890304   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 10:04:10.890310   29671 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 10:04:10.890316   29671 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 10:04:10.890324   29671 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 10:04:10.890330   29671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 10:04:10.890336   29671 command_runner.go:130] > conmon_env = [
	I0115 10:04:10.890342   29671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0115 10:04:10.890355   29671 command_runner.go:130] > ]
	I0115 10:04:10.890360   29671 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 10:04:10.890365   29671 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 10:04:10.890372   29671 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 10:04:10.890376   29671 command_runner.go:130] > # default_env = [
	I0115 10:04:10.890379   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890384   29671 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 10:04:10.890388   29671 command_runner.go:130] > # selinux = false
	I0115 10:04:10.890394   29671 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 10:04:10.890400   29671 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 10:04:10.890405   29671 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 10:04:10.890409   29671 command_runner.go:130] > # seccomp_profile = ""
	I0115 10:04:10.890426   29671 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 10:04:10.890436   29671 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 10:04:10.890445   29671 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 10:04:10.890455   29671 command_runner.go:130] > # which might increase security.
	I0115 10:04:10.890460   29671 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0115 10:04:10.890468   29671 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 10:04:10.890477   29671 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 10:04:10.890483   29671 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 10:04:10.890495   29671 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 10:04:10.890503   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:04:10.890507   29671 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 10:04:10.890515   29671 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 10:04:10.890520   29671 command_runner.go:130] > # the cgroup blockio controller.
	I0115 10:04:10.890527   29671 command_runner.go:130] > # blockio_config_file = ""
	I0115 10:04:10.890534   29671 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 10:04:10.890540   29671 command_runner.go:130] > # irqbalance daemon.
	I0115 10:04:10.890546   29671 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 10:04:10.890554   29671 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 10:04:10.890561   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:04:10.890565   29671 command_runner.go:130] > # rdt_config_file = ""
	I0115 10:04:10.890573   29671 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 10:04:10.890578   29671 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 10:04:10.890586   29671 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 10:04:10.890595   29671 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 10:04:10.890609   29671 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 10:04:10.890622   29671 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 10:04:10.890629   29671 command_runner.go:130] > # will be added.
	I0115 10:04:10.890639   29671 command_runner.go:130] > # default_capabilities = [
	I0115 10:04:10.890649   29671 command_runner.go:130] > # 	"CHOWN",
	I0115 10:04:10.890659   29671 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 10:04:10.890669   29671 command_runner.go:130] > # 	"FSETID",
	I0115 10:04:10.890679   29671 command_runner.go:130] > # 	"FOWNER",
	I0115 10:04:10.890688   29671 command_runner.go:130] > # 	"SETGID",
	I0115 10:04:10.890695   29671 command_runner.go:130] > # 	"SETUID",
	I0115 10:04:10.890704   29671 command_runner.go:130] > # 	"SETPCAP",
	I0115 10:04:10.890713   29671 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 10:04:10.890719   29671 command_runner.go:130] > # 	"KILL",
	I0115 10:04:10.890729   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890741   29671 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 10:04:10.890750   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:04:10.890758   29671 command_runner.go:130] > # default_sysctls = [
	I0115 10:04:10.890762   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890769   29671 command_runner.go:130] > # List of devices on the host that a
	I0115 10:04:10.890776   29671 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 10:04:10.890782   29671 command_runner.go:130] > # allowed_devices = [
	I0115 10:04:10.890787   29671 command_runner.go:130] > # 	"/dev/fuse",
	I0115 10:04:10.890792   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890797   29671 command_runner.go:130] > # List of additional devices, specified as
	I0115 10:04:10.890807   29671 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 10:04:10.890812   29671 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 10:04:10.890828   29671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 10:04:10.890835   29671 command_runner.go:130] > # additional_devices = [
	I0115 10:04:10.890838   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890846   29671 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 10:04:10.890850   29671 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 10:04:10.890855   29671 command_runner.go:130] > # 	"/etc/cdi",
	I0115 10:04:10.890859   29671 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 10:04:10.890865   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890871   29671 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 10:04:10.890880   29671 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 10:04:10.890887   29671 command_runner.go:130] > # Defaults to false.
	I0115 10:04:10.890892   29671 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 10:04:10.890903   29671 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 10:04:10.890912   29671 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 10:04:10.890919   29671 command_runner.go:130] > # hooks_dir = [
	I0115 10:04:10.890924   29671 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 10:04:10.890929   29671 command_runner.go:130] > # ]
	I0115 10:04:10.890935   29671 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 10:04:10.890944   29671 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 10:04:10.890949   29671 command_runner.go:130] > # its default mounts from the following two files:
	I0115 10:04:10.890954   29671 command_runner.go:130] > #
	I0115 10:04:10.890961   29671 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 10:04:10.890969   29671 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 10:04:10.890977   29671 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 10:04:10.890980   29671 command_runner.go:130] > #
	I0115 10:04:10.890986   29671 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 10:04:10.890995   29671 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 10:04:10.891004   29671 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 10:04:10.891009   29671 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 10:04:10.891015   29671 command_runner.go:130] > #
	I0115 10:04:10.891019   29671 command_runner.go:130] > # default_mounts_file = ""
	I0115 10:04:10.891026   29671 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 10:04:10.891033   29671 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 10:04:10.891039   29671 command_runner.go:130] > pids_limit = 1024
	I0115 10:04:10.891045   29671 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 10:04:10.891053   29671 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 10:04:10.891061   29671 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 10:04:10.891072   29671 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 10:04:10.891078   29671 command_runner.go:130] > # log_size_max = -1
	I0115 10:04:10.891085   29671 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 10:04:10.891091   29671 command_runner.go:130] > # log_to_journald = false
	I0115 10:04:10.891098   29671 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 10:04:10.891105   29671 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 10:04:10.891110   29671 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 10:04:10.891117   29671 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 10:04:10.891125   29671 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 10:04:10.891131   29671 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 10:04:10.891137   29671 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 10:04:10.891143   29671 command_runner.go:130] > # read_only = false
	I0115 10:04:10.891149   29671 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 10:04:10.891158   29671 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 10:04:10.891162   29671 command_runner.go:130] > # live configuration reload.
	I0115 10:04:10.891169   29671 command_runner.go:130] > # log_level = "info"
	I0115 10:04:10.891175   29671 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 10:04:10.891182   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:04:10.891186   29671 command_runner.go:130] > # log_filter = ""
	I0115 10:04:10.891196   29671 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 10:04:10.891204   29671 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 10:04:10.891211   29671 command_runner.go:130] > # separated by comma.
	I0115 10:04:10.891215   29671 command_runner.go:130] > # uid_mappings = ""
	I0115 10:04:10.891223   29671 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 10:04:10.891229   29671 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 10:04:10.891235   29671 command_runner.go:130] > # separated by comma.
	I0115 10:04:10.891239   29671 command_runner.go:130] > # gid_mappings = ""
	I0115 10:04:10.891245   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 10:04:10.891253   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:04:10.891261   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:04:10.891268   29671 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 10:04:10.891275   29671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 10:04:10.891283   29671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 10:04:10.891291   29671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 10:04:10.891296   29671 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 10:04:10.891304   29671 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 10:04:10.891312   29671 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 10:04:10.891317   29671 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 10:04:10.891322   29671 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 10:04:10.891328   29671 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 10:04:10.891338   29671 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 10:04:10.891345   29671 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 10:04:10.891350   29671 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 10:04:10.891357   29671 command_runner.go:130] > drop_infra_ctr = false
	I0115 10:04:10.891364   29671 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 10:04:10.891372   29671 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 10:04:10.891379   29671 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 10:04:10.891386   29671 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 10:04:10.891392   29671 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 10:04:10.891399   29671 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 10:04:10.891406   29671 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 10:04:10.891415   29671 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 10:04:10.891419   29671 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0115 10:04:10.891428   29671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 10:04:10.891436   29671 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 10:04:10.891444   29671 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 10:04:10.891451   29671 command_runner.go:130] > # default_runtime = "runc"
	I0115 10:04:10.891457   29671 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 10:04:10.891467   29671 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 10:04:10.891478   29671 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 10:04:10.891485   29671 command_runner.go:130] > # creation as a file is not desired either.
	I0115 10:04:10.891497   29671 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 10:04:10.891506   29671 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 10:04:10.891510   29671 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 10:04:10.891516   29671 command_runner.go:130] > # ]
	I0115 10:04:10.891523   29671 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 10:04:10.891531   29671 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 10:04:10.891540   29671 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 10:04:10.891546   29671 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 10:04:10.891552   29671 command_runner.go:130] > #
	I0115 10:04:10.891556   29671 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 10:04:10.891564   29671 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 10:04:10.891568   29671 command_runner.go:130] > #  runtime_type = "oci"
	I0115 10:04:10.891574   29671 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 10:04:10.891580   29671 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 10:04:10.891584   29671 command_runner.go:130] > #  allowed_annotations = []
	I0115 10:04:10.891592   29671 command_runner.go:130] > # Where:
	I0115 10:04:10.891601   29671 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 10:04:10.891614   29671 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 10:04:10.891627   29671 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 10:04:10.891641   29671 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 10:04:10.891651   29671 command_runner.go:130] > #   in $PATH.
	I0115 10:04:10.891663   29671 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 10:04:10.891675   29671 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 10:04:10.891688   29671 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 10:04:10.891696   29671 command_runner.go:130] > #   state.
	I0115 10:04:10.891703   29671 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 10:04:10.891711   29671 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 10:04:10.891719   29671 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 10:04:10.891727   29671 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 10:04:10.891733   29671 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 10:04:10.891742   29671 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 10:04:10.891750   29671 command_runner.go:130] > #   The currently recognized values are:
	I0115 10:04:10.891757   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 10:04:10.891766   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 10:04:10.891774   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 10:04:10.891780   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 10:04:10.891790   29671 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 10:04:10.891798   29671 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 10:04:10.891807   29671 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 10:04:10.891813   29671 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 10:04:10.891818   29671 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 10:04:10.891825   29671 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 10:04:10.891830   29671 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0115 10:04:10.891836   29671 command_runner.go:130] > runtime_type = "oci"
	I0115 10:04:10.891840   29671 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 10:04:10.891847   29671 command_runner.go:130] > runtime_config_path = ""
	I0115 10:04:10.891851   29671 command_runner.go:130] > monitor_path = ""
	I0115 10:04:10.891857   29671 command_runner.go:130] > monitor_cgroup = ""
	I0115 10:04:10.891862   29671 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 10:04:10.891871   29671 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 10:04:10.891875   29671 command_runner.go:130] > # running containers
	I0115 10:04:10.891882   29671 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 10:04:10.891888   29671 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 10:04:10.891911   29671 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 10:04:10.891919   29671 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 10:04:10.891925   29671 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 10:04:10.891932   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 10:04:10.891938   29671 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 10:04:10.891944   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 10:04:10.891949   29671 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 10:04:10.891954   29671 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 10:04:10.891962   29671 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 10:04:10.891970   29671 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 10:04:10.891976   29671 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 10:04:10.891986   29671 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 10:04:10.891995   29671 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 10:04:10.892003   29671 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 10:04:10.892012   29671 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 10:04:10.892021   29671 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 10:04:10.892027   29671 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 10:04:10.892036   29671 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 10:04:10.892040   29671 command_runner.go:130] > # Example:
	I0115 10:04:10.892045   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 10:04:10.892050   29671 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 10:04:10.892057   29671 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 10:04:10.892062   29671 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 10:04:10.892068   29671 command_runner.go:130] > # cpuset = 0
	I0115 10:04:10.892072   29671 command_runner.go:130] > # cpushares = "0-1"
	I0115 10:04:10.892078   29671 command_runner.go:130] > # Where:
	I0115 10:04:10.892083   29671 command_runner.go:130] > # The workload name is workload-type.
	I0115 10:04:10.892093   29671 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 10:04:10.892101   29671 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 10:04:10.892107   29671 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 10:04:10.892116   29671 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 10:04:10.892124   29671 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 10:04:10.892128   29671 command_runner.go:130] > # 
	I0115 10:04:10.892134   29671 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 10:04:10.892139   29671 command_runner.go:130] > #
	I0115 10:04:10.892145   29671 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 10:04:10.892153   29671 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 10:04:10.892161   29671 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 10:04:10.892170   29671 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 10:04:10.892178   29671 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 10:04:10.892182   29671 command_runner.go:130] > [crio.image]
	I0115 10:04:10.892190   29671 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 10:04:10.892194   29671 command_runner.go:130] > # default_transport = "docker://"
	I0115 10:04:10.892203   29671 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 10:04:10.892209   29671 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:04:10.892215   29671 command_runner.go:130] > # global_auth_file = ""
	I0115 10:04:10.892220   29671 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 10:04:10.892226   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:04:10.892233   29671 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 10:04:10.892239   29671 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 10:04:10.892247   29671 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 10:04:10.892252   29671 command_runner.go:130] > # This option supports live configuration reload.
	I0115 10:04:10.892258   29671 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 10:04:10.892264   29671 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 10:04:10.892273   29671 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 10:04:10.892282   29671 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 10:04:10.892287   29671 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 10:04:10.892294   29671 command_runner.go:130] > # pause_command = "/pause"
	I0115 10:04:10.892301   29671 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 10:04:10.892310   29671 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 10:04:10.892316   29671 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 10:04:10.892322   29671 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 10:04:10.892329   29671 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 10:04:10.892334   29671 command_runner.go:130] > # signature_policy = ""
	I0115 10:04:10.892340   29671 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 10:04:10.892349   29671 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 10:04:10.892355   29671 command_runner.go:130] > # changing them here.
	I0115 10:04:10.892359   29671 command_runner.go:130] > # insecure_registries = [
	I0115 10:04:10.892365   29671 command_runner.go:130] > # ]
	I0115 10:04:10.892372   29671 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 10:04:10.892380   29671 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 10:04:10.892384   29671 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 10:04:10.892392   29671 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 10:04:10.892396   29671 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 10:04:10.892405   29671 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 10:04:10.892411   29671 command_runner.go:130] > # CNI plugins.
	I0115 10:04:10.892415   29671 command_runner.go:130] > [crio.network]
	I0115 10:04:10.892421   29671 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 10:04:10.892429   29671 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0115 10:04:10.892433   29671 command_runner.go:130] > # cni_default_network = ""
	I0115 10:04:10.892441   29671 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 10:04:10.892446   29671 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 10:04:10.892454   29671 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 10:04:10.892458   29671 command_runner.go:130] > # plugin_dirs = [
	I0115 10:04:10.892464   29671 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 10:04:10.892468   29671 command_runner.go:130] > # ]
	I0115 10:04:10.892475   29671 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 10:04:10.892481   29671 command_runner.go:130] > [crio.metrics]
	I0115 10:04:10.892486   29671 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 10:04:10.892496   29671 command_runner.go:130] > enable_metrics = true
	I0115 10:04:10.892500   29671 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 10:04:10.892506   29671 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 10:04:10.892515   29671 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0115 10:04:10.892522   29671 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 10:04:10.892531   29671 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 10:04:10.892536   29671 command_runner.go:130] > # metrics_collectors = [
	I0115 10:04:10.892540   29671 command_runner.go:130] > # 	"operations",
	I0115 10:04:10.892546   29671 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 10:04:10.892553   29671 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 10:04:10.892558   29671 command_runner.go:130] > # 	"operations_errors",
	I0115 10:04:10.892564   29671 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 10:04:10.892569   29671 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 10:04:10.892575   29671 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 10:04:10.892580   29671 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 10:04:10.892586   29671 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 10:04:10.892592   29671 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 10:04:10.892600   29671 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 10:04:10.892610   29671 command_runner.go:130] > # 	"containers_oom_total",
	I0115 10:04:10.892620   29671 command_runner.go:130] > # 	"containers_oom",
	I0115 10:04:10.892630   29671 command_runner.go:130] > # 	"processes_defunct",
	I0115 10:04:10.892640   29671 command_runner.go:130] > # 	"operations_total",
	I0115 10:04:10.892648   29671 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 10:04:10.892656   29671 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 10:04:10.892663   29671 command_runner.go:130] > # 	"operations_errors_total",
	I0115 10:04:10.892673   29671 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 10:04:10.892683   29671 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 10:04:10.892688   29671 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 10:04:10.892694   29671 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 10:04:10.892699   29671 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 10:04:10.892705   29671 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 10:04:10.892709   29671 command_runner.go:130] > # ]
	I0115 10:04:10.892715   29671 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 10:04:10.892722   29671 command_runner.go:130] > # metrics_port = 9090
	I0115 10:04:10.892727   29671 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 10:04:10.892734   29671 command_runner.go:130] > # metrics_socket = ""
	I0115 10:04:10.892739   29671 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 10:04:10.892748   29671 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 10:04:10.892757   29671 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 10:04:10.892765   29671 command_runner.go:130] > # certificate on any modification event.
	I0115 10:04:10.892769   29671 command_runner.go:130] > # metrics_cert = ""
	I0115 10:04:10.892774   29671 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 10:04:10.892779   29671 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 10:04:10.892786   29671 command_runner.go:130] > # metrics_key = ""
	I0115 10:04:10.892791   29671 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 10:04:10.892796   29671 command_runner.go:130] > [crio.tracing]
	I0115 10:04:10.892801   29671 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 10:04:10.892808   29671 command_runner.go:130] > # enable_tracing = false
	I0115 10:04:10.892813   29671 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 10:04:10.892817   29671 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 10:04:10.892824   29671 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 10:04:10.892830   29671 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 10:04:10.892836   29671 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 10:04:10.892842   29671 command_runner.go:130] > [crio.stats]
	I0115 10:04:10.892848   29671 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 10:04:10.892855   29671 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 10:04:10.892859   29671 command_runner.go:130] > # stats_collection_period = 0
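The block above is the full `crio config` dump that minikube captures to confirm the runtime settings it will drive kubeadm with (storage_driver, cgroup_manager, pause_image, the runc runtime handler, and so on). The Go sketch below shows one way to capture the same output and pick out a few of those keys; it assumes `crio` is on PATH and sudo is available, and it is illustrative only, not minikube's actual parser.

// crioconfig.go: run `crio config` and print a few settings of interest
// (a minimal sketch under the assumptions stated above).
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}

	wanted := []string{"cgroup_manager", "pause_image", "storage_driver"}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		for _, key := range wanted {
			if strings.HasPrefix(line, key+" =") {
				fmt.Println(line) // e.g. cgroup_manager = "cgroupfs"
			}
		}
	}
}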
	I0115 10:04:10.892920   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:04:10.892928   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:04:10.892937   29671 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:04:10.892953   29671 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-975382 NodeName:multinode-975382-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:04:10.893049   29671 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-975382-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
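One oddity worth noting in the generated kubelet config above: the evictionHard values appear as "0%!"(MISSING). A plausible explanation is that the rendered YAML contains literal % characters and is later pushed through a Printf-style formatter with no arguments, at which point Go's fmt package flags the stray verb. The snippet below reproduces the symptom; it is an illustration of the fmt behaviour, not code lifted from minikube.

// missingverb.go: reproduce the `0%!"(MISSING)` artifact seen in the kubelet
// config above by formatting text that contains a literal % character.
package main

import "fmt"

func main() {
	rendered := `nodefs.available: "0%"` // one line of the generated kubelet config

	fmt.Println(rendered)       // safe: the string is printed verbatim
	fmt.Printf(rendered + "\n") // unsafe: prints nodefs.available: "0%!"(MISSING)
}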
	I0115 10:04:10.893097   29671 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:04:10.893142   29671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:04:10.901911   29671 command_runner.go:130] > kubeadm
	I0115 10:04:10.901930   29671 command_runner.go:130] > kubectl
	I0115 10:04:10.901936   29671 command_runner.go:130] > kubelet
	I0115 10:04:10.902025   29671 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:04:10.902084   29671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0115 10:04:10.911047   29671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0115 10:04:10.927073   29671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
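The two scp lines above install the kubelet systemd drop-in (10-kubeadm.conf) and the kubelet.service unit generated earlier, after which systemd needs a daemon-reload. Below is a minimal Go sketch of that step; the drop-in content is abbreviated from the unit shown above, there is no SSH hop (the real flow copies the files to the node over SSH), the paths require root, and the layout is an assumption for illustration only.

// kubeletunit.go: write the kubelet drop-in and reload systemd
// (illustrative sketch, not minikube's implementation).
package main

import (
	"os"
	"os/exec"
)

func main() {
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-975382-m03 --node-ip=192.168.39.198
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// Equivalent of: sudo systemctl daemon-reload
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
}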
	I0115 10:04:10.942773   29671 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0115 10:04:10.946490   29671 command_runner.go:130] > 192.168.39.217	control-plane.minikube.internal
	I0115 10:04:10.946631   29671 host.go:66] Checking if "multinode-975382" exists ...
	I0115 10:04:10.946923   29671 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:04:10.947050   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:04:10.947098   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:04:10.961627   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0115 10:04:10.962039   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:04:10.962492   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:04:10.962513   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:04:10.962870   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:04:10.963075   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:04:10.963206   29671 start.go:304] JoinCluster: &{Name:multinode-975382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-975382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.95 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:04:10.963331   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 10:04:10.963352   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:04:10.966509   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:04:10.966985   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:04:10.967013   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:04:10.967143   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:04:10.967328   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:04:10.967491   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:04:10.967650   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:04:11.131012   29671 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2h1r19.f3eb2speigaz4su0 --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
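	The join command above is produced by running kubeadm on the existing control-plane node over SSH. A minimal way to reproduce it by hand, assuming shell access to the control-plane VM and the bundled kubeadm path shown in the Run line above:

	    # generate a non-expiring join command, mirroring the command logged above
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm token create --print-join-command --ttl=0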
	I0115 10:04:11.131265   29671 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0115 10:04:11.131312   29671 host.go:66] Checking if "multinode-975382" exists ...
	I0115 10:04:11.131742   29671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:04:11.131794   29671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:04:11.145894   29671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0115 10:04:11.146266   29671 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:04:11.146748   29671 main.go:141] libmachine: Using API Version  1
	I0115 10:04:11.146769   29671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:04:11.147068   29671 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:04:11.147269   29671 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 10:04:11.147419   29671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-975382-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0115 10:04:11.147436   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 10:04:11.150269   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:04:11.150678   29671 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 10:04:11.150704   29671 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 10:04:11.150832   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 10:04:11.150983   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 10:04:11.151132   29671 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 10:04:11.151244   29671 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 10:04:11.300749   29671 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0115 10:04:11.357793   29671 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-q2p7k, kube-system/kube-proxy-fxwtq
	I0115 10:04:14.375334   29671 command_runner.go:130] > node/multinode-975382-m03 cordoned
	I0115 10:04:14.375364   29671 command_runner.go:130] > pod "busybox-5bc68d56bd-bsnlw" has DeletionTimestamp older than 1 seconds, skipping
	I0115 10:04:14.375373   29671 command_runner.go:130] > node/multinode-975382-m03 drained
	I0115 10:04:14.375400   29671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-975382-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.227958922s)
	I0115 10:04:14.375421   29671 node.go:108] successfully drained node "m03"
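	Before rejoining m03, the stale node is drained with the exact flags shown above, including the deprecated --delete-local-data that triggers the first warning. A sketch of the same drain using only the current flag, assuming the kubeconfig and kubectl paths from the log:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-975382-m03 \
	      --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
	      --disable-eviction --ignore-daemonsets --delete-emptydir-data   # --delete-local-data dropped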
	I0115 10:04:14.375899   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:04:14.376168   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:04:14.376405   29671 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0115 10:04:14.376448   29671 round_trippers.go:463] DELETE https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:14.376455   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:14.376464   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:14.376474   29671 round_trippers.go:473]     Content-Type: application/json
	I0115 10:04:14.376482   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:14.390539   29671 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0115 10:04:14.390555   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:14.390562   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:14.390567   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:14.390573   29671 round_trippers.go:580]     Content-Length: 171
	I0115 10:04:14.390578   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:14 GMT
	I0115 10:04:14.390585   29671 round_trippers.go:580]     Audit-Id: a329aadc-adf5-4741-a401-57b2bf8b119b
	I0115 10:04:14.390592   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:14.390600   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:14.390703   29671 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-975382-m03","kind":"nodes","uid":"e8425595-976c-4f6f-8ad3-6cb2de7275fd"}}
	I0115 10:04:14.390744   29671 node.go:124] successfully deleted node "m03"
	I0115 10:04:14.390755   29671 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
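	The node object itself is removed with a raw DELETE against /api/v1/nodes, as logged above. Roughly the same effect from a shell, assuming the kubeconfig path seen earlier in the log:

	    # delete the stale worker node object before rejoining it
	    kubectl --kubeconfig /var/lib/minikube/kubeconfig delete node multinode-975382-m03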
	I0115 10:04:14.390777   29671 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0115 10:04:14.390806   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2h1r19.f3eb2speigaz4su0 --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-975382-m03"
	I0115 10:04:14.444215   29671 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 10:04:14.605115   29671 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0115 10:04:14.605150   29671 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0115 10:04:14.673368   29671 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:04:14.673399   29671 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:04:14.673407   29671 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 10:04:14.835572   29671 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0115 10:04:15.357884   29671 command_runner.go:130] > This node has joined the cluster:
	I0115 10:04:15.357909   29671 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0115 10:04:15.357916   29671 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0115 10:04:15.357923   29671 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0115 10:04:15.360565   29671 command_runner.go:130] ! W0115 10:04:14.436436    2375 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0115 10:04:15.360596   29671 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0115 10:04:15.360608   29671 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0115 10:04:15.360623   29671 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0115 10:04:15.360754   29671 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 10:04:15.648581   29671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-975382 minikube.k8s.io/updated_at=2024_01_15T10_04_15_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:04:15.751721   29671 command_runner.go:130] > node/multinode-975382-m02 labeled
	I0115 10:04:15.766530   29671 command_runner.go:130] > node/multinode-975382-m03 labeled
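	Both worker nodes pick up the labels because the label command above uses the selector -l minikube.k8s.io/primary!=true rather than a node name. A sketch of labelling only the rejoined node instead, assuming standard kubectl label semantics:

	    kubectl --kubeconfig /var/lib/minikube/kubeconfig label node multinode-975382-m03 \
	      minikube.k8s.io/primary=false --overwrite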
	I0115 10:04:15.769012   29671 start.go:306] JoinCluster complete in 4.805803589s
	I0115 10:04:15.769035   29671 cni.go:84] Creating CNI manager for ""
	I0115 10:04:15.769043   29671 cni.go:136] 3 nodes found, recommending kindnet
	I0115 10:04:15.769098   29671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 10:04:15.776709   29671 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 10:04:15.776731   29671 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0115 10:04:15.776738   29671 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0115 10:04:15.776747   29671 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 10:04:15.776756   29671 command_runner.go:130] > Access: 2024-01-15 10:00:04.443236172 +0000
	I0115 10:04:15.776768   29671 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0115 10:04:15.776779   29671 command_runner.go:130] > Change: 2024-01-15 10:00:02.526236172 +0000
	I0115 10:04:15.776788   29671 command_runner.go:130] >  Birth: -
	I0115 10:04:15.776896   29671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 10:04:15.776918   29671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 10:04:15.795292   29671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 10:04:16.154361   29671 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:04:16.154381   29671 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 10:04:16.154387   29671 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 10:04:16.154392   29671 command_runner.go:130] > daemonset.apps/kindnet configured
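	With three nodes detected, the kindnet CNI manifest copied to /var/tmp/minikube/cni.yaml is reapplied. The first line below mirrors the apply logged above; the second is an assumed follow-up check that the daemonset has rolled out:

	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system rollout status ds/kindnet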
	I0115 10:04:16.154810   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:04:16.155022   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:04:16.155322   29671 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 10:04:16.155336   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.155343   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.155349   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.158666   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:16.158680   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.158687   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.158692   29671 round_trippers.go:580]     Content-Length: 291
	I0115 10:04:16.158698   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.158703   29671 round_trippers.go:580]     Audit-Id: 8420a24e-c929-413f-989e-934b31472a92
	I0115 10:04:16.158709   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.158717   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.158725   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.158750   29671 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9b737f2-ab4d-4b14-b6f0-b06c44cfcbb8","resourceVersion":"901","creationTimestamp":"2024-01-15T09:50:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 10:04:16.158849   29671 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-975382" context rescaled to 1 replicas
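	The rescale above is read back through the coredns deployment's scale subresource. An equivalent imperative form, offered as an illustration rather than what minikube itself runs:

	    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
	      scale deployment coredns --replicas=1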
	I0115 10:04:16.158875   29671 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0115 10:04:16.160929   29671 out.go:177] * Verifying Kubernetes components...
	I0115 10:04:16.162367   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:04:16.177695   29671 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:04:16.177940   29671 kapi.go:59] client config for multinode-975382: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/profiles/multinode-975382/client.key", CAFile:"/home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 10:04:16.178187   29671 node_ready.go:35] waiting up to 6m0s for node "multinode-975382-m03" to be "Ready" ...
	I0115 10:04:16.178261   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:16.178272   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.178283   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.178295   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.180623   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.180636   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.180642   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.180647   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.180653   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.180659   29671 round_trippers.go:580]     Audit-Id: da7f4fa5-2d4a-40fe-89c1-ad6be9990453
	I0115 10:04:16.180664   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.180669   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.180840   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"42c6c84b-15b4-407e-835c-8a395cf3ad2a","resourceVersion":"1230","creationTimestamp":"2024-01-15T10:04:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_04_15_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:04:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0115 10:04:16.181087   29671 node_ready.go:49] node "multinode-975382-m03" has status "Ready":"True"
	I0115 10:04:16.181099   29671 node_ready.go:38] duration metric: took 2.896457ms waiting for node "multinode-975382-m03" to be "Ready" ...
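	The readiness check above is a direct GET on the node object. A comparable one-liner, assuming kubectl wait is an acceptable stand-in for the client-go polling used here:

	    kubectl --kubeconfig /var/lib/minikube/kubeconfig wait \
	      --for=condition=Ready node/multinode-975382-m03 --timeout=6m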
	I0115 10:04:16.181106   29671 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:04:16.181153   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0115 10:04:16.181160   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.181167   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.181176   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.184836   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:16.184852   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.184861   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.184869   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.184877   29671 round_trippers.go:580]     Audit-Id: 21db38e4-13eb-41c1-becb-9b3da25ece61
	I0115 10:04:16.184890   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.184899   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.184910   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.185898   29671 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1236"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82079 chars]
	I0115 10:04:16.188320   29671 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.188389   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n2sqg
	I0115 10:04:16.188400   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.188410   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.188416   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.190689   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.190706   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.190712   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.190718   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.190722   29671 round_trippers.go:580]     Audit-Id: 71762f2a-8e6d-483a-b48f-1ca2b667be57
	I0115 10:04:16.190728   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.190736   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.190747   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.190954   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n2sqg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f303a63a-c959-477e-89d5-c35bd0802b1b","resourceVersion":"897","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f42bdb79-a21b-4d3e-b9d3-f0788b1de87f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0115 10:04:16.191355   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:16.191368   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.191375   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.191381   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.194574   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:16.194587   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.194593   29671 round_trippers.go:580]     Audit-Id: fe584e1f-2997-4da0-9618-9881c5deeac6
	I0115 10:04:16.194603   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.194611   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.194618   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.194625   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.194636   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.194842   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:16.195129   29671 pod_ready.go:92] pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:16.195143   29671 pod_ready.go:81] duration metric: took 6.804152ms waiting for pod "coredns-5dd5756b68-n2sqg" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.195150   29671 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.195193   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-975382
	I0115 10:04:16.195200   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.195207   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.195212   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.200659   29671 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 10:04:16.200673   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.200679   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.200684   29671 round_trippers.go:580]     Audit-Id: f562b21b-c7d9-4df9-ae22-2c8e80e38fb2
	I0115 10:04:16.200689   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.200695   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.200704   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.200713   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.200889   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-975382","namespace":"kube-system","uid":"6b8601c3-a366-4171-9221-4b83d091aff7","resourceVersion":"865","creationTimestamp":"2024-01-15T09:50:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.mirror":"2cb63d0e596a024d1a6f045abe90bff6","kubernetes.io/config.seen":"2024-01-15T09:50:07.549379101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0115 10:04:16.201239   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:16.201252   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.201258   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.201264   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.205441   29671 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 10:04:16.205454   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.205460   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.205465   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.205470   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.205476   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.205481   29671 round_trippers.go:580]     Audit-Id: 071f906e-bb2a-4ae0-bc64-0344da7ebc69
	I0115 10:04:16.205485   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.205761   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:16.206023   29671 pod_ready.go:92] pod "etcd-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:16.206037   29671 pod_ready.go:81] duration metric: took 10.881386ms waiting for pod "etcd-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.206051   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.206091   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-975382
	I0115 10:04:16.206100   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.206109   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.206117   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.207805   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:04:16.207817   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.207823   29671 round_trippers.go:580]     Audit-Id: 51be18e7-838b-48e0-8c2e-a9b4637023fb
	I0115 10:04:16.207828   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.207838   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.207849   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.207860   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.207869   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.208035   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-975382","namespace":"kube-system","uid":"0c174d15-48a9-4394-ba76-207b7cbc42a0","resourceVersion":"873","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.mirror":"638704967c86b61fc474d50d411fc862","kubernetes.io/config.seen":"2024-01-15T09:50:16.415736932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0115 10:04:16.208385   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:16.208398   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.208404   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.208410   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.209966   29671 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 10:04:16.209977   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.209982   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.209987   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.209992   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.209997   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.210002   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.210011   29671 round_trippers.go:580]     Audit-Id: 36e862fc-7754-4be6-a8df-90baef20ece3
	I0115 10:04:16.210253   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:16.210561   29671 pod_ready.go:92] pod "kube-apiserver-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:16.210575   29671 pod_ready.go:81] duration metric: took 4.517858ms waiting for pod "kube-apiserver-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.210582   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.210633   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-975382
	I0115 10:04:16.210643   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.210653   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.210665   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.213529   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.213544   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.213552   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.213560   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.213567   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.213580   29671 round_trippers.go:580]     Audit-Id: 4cac7f3c-2559-40ef-8a6e-23ec50b951f0
	I0115 10:04:16.213591   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.213599   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.213777   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-975382","namespace":"kube-system","uid":"0fabcc70-f923-40a7-86b4-70c0cc2213ce","resourceVersion":"887","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.mirror":"1a6b49eaacd27748d82a7a1330e13424","kubernetes.io/config.seen":"2024-01-15T09:50:16.415738247Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0115 10:04:16.214120   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:16.214133   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.214140   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.214146   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.216449   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.216461   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.216466   29671 round_trippers.go:580]     Audit-Id: ed53da2d-2b6e-437b-91ea-063585c0cce6
	I0115 10:04:16.216473   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.216481   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.216488   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.216495   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.216503   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.216706   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:16.216964   29671 pod_ready.go:92] pod "kube-controller-manager-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:16.216977   29671 pod_ready.go:81] duration metric: took 6.389388ms waiting for pod "kube-controller-manager-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:16.216985   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
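	The remaining lines poll kube-proxy-fxwtq and the m03 node object in turn, occasionally backing off due to client-side throttling. A rough manual equivalent, assuming the pod name from the log is still current:

	    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system wait \
	      --for=condition=Ready pod/kube-proxy-fxwtq --timeout=6m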
	I0115 10:04:16.378278   29671 request.go:629] Waited for 161.245962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:04:16.378357   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:04:16.378369   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.378380   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.378390   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.381358   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.381384   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.381393   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.381399   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.381404   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.381409   29671 round_trippers.go:580]     Audit-Id: 26798d4b-33ad-4a80-b32b-560c2fe6df1e
	I0115 10:04:16.381414   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.381419   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.381654   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"1233","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0115 10:04:16.578465   29671 request.go:629] Waited for 196.408904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:16.578529   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:16.578536   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.578548   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.578558   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.580860   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:16.580874   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.580880   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.580885   29671 round_trippers.go:580]     Audit-Id: 14cee98a-42df-4ac8-b786-524931903534
	I0115 10:04:16.580892   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.580902   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.580910   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.580919   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.581085   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"42c6c84b-15b4-407e-835c-8a395cf3ad2a","resourceVersion":"1230","creationTimestamp":"2024-01-15T10:04:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_04_15_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:04:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0115 10:04:16.778664   29671 request.go:629] Waited for 61.216953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:04:16.778729   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:04:16.778737   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.778751   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.778765   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.781874   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:16.781895   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.781902   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.781907   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.781914   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.781923   29671 round_trippers.go:580]     Audit-Id: b522a30c-799a-4ecd-93df-83f24c0f9609
	I0115 10:04:16.781939   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.781946   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.782372   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"1233","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0115 10:04:16.979161   29671 request.go:629] Waited for 196.284693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:16.979224   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:16.979230   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:16.979238   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:16.979244   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:16.982888   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:16.982912   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:16.982922   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:16.982941   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:16.982947   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:16.982952   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:16.982958   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:16 GMT
	I0115 10:04:16.982966   29671 round_trippers.go:580]     Audit-Id: 8bf8d602-c5f7-4de2-9681-0a5b0ac5d17b
	I0115 10:04:16.983305   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"42c6c84b-15b4-407e-835c-8a395cf3ad2a","resourceVersion":"1230","creationTimestamp":"2024-01-15T10:04:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_04_15_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:04:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0115 10:04:17.217768   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fxwtq
	I0115 10:04:17.217790   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:17.217798   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:17.217804   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:17.220413   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:17.220448   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:17.220455   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:17.220461   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:17.220466   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:17 GMT
	I0115 10:04:17.220471   29671 round_trippers.go:580]     Audit-Id: 5fa81cfb-6132-49ea-a3a9-85e9669ba6a5
	I0115 10:04:17.220476   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:17.220481   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:17.221023   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fxwtq","generateName":"kube-proxy-","namespace":"kube-system","uid":"54b5ed4b-d227-46d0-b113-85849b0c0700","resourceVersion":"1245","creationTimestamp":"2024-01-15T09:51:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0115 10:04:17.378846   29671 request.go:629] Waited for 157.327937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:17.378921   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m03
	I0115 10:04:17.378926   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:17.378943   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:17.378952   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:17.382006   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:17.382024   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:17.382034   29671 round_trippers.go:580]     Audit-Id: 9af2d6c2-62c9-45ac-a08b-ee48aa8ecbc8
	I0115 10:04:17.382042   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:17.382051   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:17.382070   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:17.382081   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:17.382093   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:17 GMT
	I0115 10:04:17.382450   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m03","uid":"42c6c84b-15b4-407e-835c-8a395cf3ad2a","resourceVersion":"1230","creationTimestamp":"2024-01-15T10:04:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_04_15_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:04:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0115 10:04:17.382723   29671 pod_ready.go:92] pod "kube-proxy-fxwtq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:17.382740   29671 pod_ready.go:81] duration metric: took 1.165747942s waiting for pod "kube-proxy-fxwtq" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:17.382749   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:17.578916   29671 request.go:629] Waited for 196.105488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:04:17.578984   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgsx4
	I0115 10:04:17.578990   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:17.579000   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:17.579009   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:17.582510   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:17.582593   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:17.582620   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:17.582626   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:17.582635   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:17.582645   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:17 GMT
	I0115 10:04:17.582653   29671 round_trippers.go:580]     Audit-Id: b150b53d-6a9e-4f93-a678-0c02052e8943
	I0115 10:04:17.582658   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:17.583172   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgsx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"a779cea9-5532-4d69-9e49-ac2879e028ec","resourceVersion":"827","creationTimestamp":"2024-01-15T09:50:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 10:04:17.778961   29671 request.go:629] Waited for 195.340386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:17.779017   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:17.779022   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:17.779032   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:17.779041   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:17.781661   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:17.781683   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:17.781692   29671 round_trippers.go:580]     Audit-Id: b56b97bc-cef8-4f71-939b-62b6957f07c2
	I0115 10:04:17.781701   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:17.781718   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:17.781730   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:17.781741   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:17.781752   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:17 GMT
	I0115 10:04:17.782082   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:17.782516   29671 pod_ready.go:92] pod "kube-proxy-jgsx4" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:17.782534   29671 pod_ready.go:81] duration metric: took 399.767685ms waiting for pod "kube-proxy-jgsx4" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:17.782544   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:17.978498   29671 request.go:629] Waited for 195.894719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:04:17.978567   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-znv78
	I0115 10:04:17.978573   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:17.978584   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:17.978593   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:17.981834   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:17.981858   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:17.981868   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:17.981877   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:17 GMT
	I0115 10:04:17.981885   29671 round_trippers.go:580]     Audit-Id: 4ed8e2ce-be6f-4cf3-a64c-abd833fbf9c7
	I0115 10:04:17.981893   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:17.981902   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:17.981913   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:17.982176   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-znv78","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb4d831f-7308-4f44-b944-fdfdf1d583c2","resourceVersion":"1070","creationTimestamp":"2024-01-15T09:51:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"93bf4fba-66a6-4108-a547-85160e0f6382","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:51:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93bf4fba-66a6-4108-a547-85160e0f6382\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0115 10:04:18.178988   29671 request.go:629] Waited for 196.344417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:04:18.179058   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382-m02
	I0115 10:04:18.179075   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:18.179083   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:18.179089   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:18.184535   29671 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 10:04:18.184555   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:18.184562   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:18.184567   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:18.184572   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:18.184580   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:18.184585   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:18 GMT
	I0115 10:04:18.184590   29671 round_trippers.go:580]     Audit-Id: b04e2b2c-6731-468b-85bd-ab10e9b07ffe
	I0115 10:04:18.185518   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382-m02","uid":"27561a41-ede8-4b35-93b8-8e7a61b08b6c","resourceVersion":"1229","creationTimestamp":"2024-01-15T10:02:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T10_04_15_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T10:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0115 10:04:18.185770   29671 pod_ready.go:92] pod "kube-proxy-znv78" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:18.185784   29671 pod_ready.go:81] duration metric: took 403.234032ms waiting for pod "kube-proxy-znv78" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:18.185792   29671 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:18.378896   29671 request.go:629] Waited for 193.044174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:04:18.378984   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-975382
	I0115 10:04:18.378995   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:18.379008   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:18.379034   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:18.382287   29671 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 10:04:18.382308   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:18.382317   29671 round_trippers.go:580]     Audit-Id: bbab8044-113d-4bcc-81bd-9eaee920ba4c
	I0115 10:04:18.382325   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:18.382333   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:18.382341   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:18.382347   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:18.382356   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:18 GMT
	I0115 10:04:18.382782   29671 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-975382","namespace":"kube-system","uid":"d7c93aee-4d7c-4264-8d65-de8781105178","resourceVersion":"889","creationTimestamp":"2024-01-15T09:50:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.mirror":"c61deabbad0762e4c988c95c1d9d34bc","kubernetes.io/config.seen":"2024-01-15T09:50:16.415739183Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0115 10:04:18.579164   29671 request.go:629] Waited for 196.04825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:18.579233   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-975382
	I0115 10:04:18.579238   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:18.579245   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:18.579251   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:18.582194   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:18.582209   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:18.582215   29671 round_trippers.go:580]     Audit-Id: 9f555a12-3f3e-41b0-85a8-07c96dc2820e
	I0115 10:04:18.582220   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:18.582225   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:18.582230   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:18.582235   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:18.582240   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:18 GMT
	I0115 10:04:18.582865   29671 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:50:12Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0115 10:04:18.583162   29671 pod_ready.go:92] pod "kube-scheduler-multinode-975382" in "kube-system" namespace has status "Ready":"True"
	I0115 10:04:18.583175   29671 pod_ready.go:81] duration metric: took 397.377754ms waiting for pod "kube-scheduler-multinode-975382" in "kube-system" namespace to be "Ready" ...
	I0115 10:04:18.583185   29671 pod_ready.go:38] duration metric: took 2.402067858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:04:18.583196   29671 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:04:18.583238   29671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:04:18.597381   29671 system_svc.go:56] duration metric: took 14.177964ms WaitForService to wait for kubelet.
	I0115 10:04:18.597408   29671 kubeadm.go:581] duration metric: took 2.438512328s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:04:18.597429   29671 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:04:18.778410   29671 request.go:629] Waited for 180.908742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0115 10:04:18.778498   29671 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0115 10:04:18.778503   29671 round_trippers.go:469] Request Headers:
	I0115 10:04:18.778510   29671 round_trippers.go:473]     Accept: application/json, */*
	I0115 10:04:18.778516   29671 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 10:04:18.780876   29671 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 10:04:18.780895   29671 round_trippers.go:577] Response Headers:
	I0115 10:04:18.780904   29671 round_trippers.go:580]     Date: Mon, 15 Jan 2024 10:04:18 GMT
	I0115 10:04:18.780913   29671 round_trippers.go:580]     Audit-Id: dd4e6487-17e4-4686-b139-5658a44c6de8
	I0115 10:04:18.780921   29671 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 10:04:18.780929   29671 round_trippers.go:580]     Content-Type: application/json
	I0115 10:04:18.780937   29671 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8e50bba-bf04-4dd6-b708-f6a3c038d03b
	I0115 10:04:18.780946   29671 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 65450d71-8e3e-40a8-bffc-c5de276cc038
	I0115 10:04:18.781486   29671 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1248"},"items":[{"metadata":{"name":"multinode-975382","uid":"637b27fc-4c62-49e1-b9ef-dac5230e6b18","resourceVersion":"917","creationTimestamp":"2024-01-15T09:50:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-975382","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-975382","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_50_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16467 chars]
	I0115 10:04:18.782038   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:04:18.782055   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:04:18.782063   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:04:18.782068   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:04:18.782071   29671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:04:18.782074   29671 node_conditions.go:123] node cpu capacity is 2
	I0115 10:04:18.782078   29671 node_conditions.go:105] duration metric: took 184.644592ms to run NodePressure ...
	I0115 10:04:18.782087   29671 start.go:228] waiting for startup goroutines ...
	I0115 10:04:18.782107   29671 start.go:242] writing updated cluster config ...
	I0115 10:04:18.782364   29671 ssh_runner.go:195] Run: rm -f paused
	I0115 10:04:18.828298   29671 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:04:18.830607   29671 out.go:177] * Done! kubectl is now configured to use "multinode-975382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:00:03 UTC, ends at Mon 2024-01-15 10:04:20 UTC. --
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.894761668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0117bf67-d31c-4aa9-acc4-2d73a306a327 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.894968057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdbfe911d11bb4f94afd172c8e81ffe5aed999142b581c1a31ac9fe74f8f53d,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312867182831939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddac1ac5171f1b7a829330ade44f6eb26e35968d81e3ac1e8d847ce1a830bf43,PodSandboxId:3e841d6cb060740080c3809a6edfcc6e05d7f2fd492df7f3d7c458c8e3846fd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312853457917467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24470295ae8b08abf2c2cddd338fc84002867917ba05b245f7d5db76cdbe7a2b,PodSandboxId:a677b47eed1bc8f540fd330b5e5ef014cee8c729c3638184c0229981883e4036,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312852727484275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71532763a383283fb3ff8da72c4e98d6055499d1f943a9352fc9f61d1e9a0b3,PodSandboxId:03dbbeac5141e3f0412f5541bdb28650f8f9c370880cae855c252e876c0d46b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312839337933192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8251f99cdb8a7317e7b02252ad8fcb5321266b5fa48cd40a428e92ecd1da8f,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705312836818169225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6638d1b848849ca204601706f298a41acb7aa6be3d039e66c94776dcea3d336,PodSandboxId:f39fc258cd2802454125f177f6268f474d6a505606ed46dcdbb6fc2588c5aa6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312836802442590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879e0
28ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d984d25305f6ce4daa0fe1947cc03bd843cfb0f96b420f76c90f034d386cfbdf,PodSandboxId:bbc77e622e1990edcfea581750f82a3fe04705b505003a5bab4088ca5110bc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312830435662914,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e22dc87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d7a52a6b2daed9c929af487567ab26c0e5389e88c79eff9f26453d772b272,PodSandboxId:11aeeb8f84db30ff989cda850a47065d1aaaa3b886f6dc4f54b8d4fecb5c98a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312830307235082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bad3af2943ad663b35cf06c3c091a33ff99f46490e15b1c266e5de1a37a3d5,PodSandboxId:0861a72137440f9c6062b5cd43ef4966b717592381db4e3dd5e760a4b53bc10e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312829955231123,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d411fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164aecf883783ec7423bc48f1a8e40c567eed7cb18bb83b02d9ff8b5450725df,PodSandboxId:20ff152d2444725b110aca60590936e8d5ed9581a105e8ef9b4b356367270f73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312829866803029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd27748d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0117bf67-d31c-4aa9-acc4-2d73a306a327 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.913149516Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=801ea3fb-88e0-4bc0-88b2-b498c016de2f name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.913370362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=801ea3fb-88e0-4bc0-88b2-b498c016de2f name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.950409442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e938414c-0254-41a0-93d2-3911793fa88d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.950533504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e938414c-0254-41a0-93d2-3911793fa88d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.952340945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1880f00d-698d-4ec6-bb3a-e83aba23c297 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.952811590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705313059952796703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1880f00d-698d-4ec6-bb3a-e83aba23c297 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.953526081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67b438be-c47c-4758-8dbe-714c5a4ae64c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.953624358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67b438be-c47c-4758-8dbe-714c5a4ae64c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:19 multinode-975382 crio[714]: time="2024-01-15 10:04:19.953969656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdbfe911d11bb4f94afd172c8e81ffe5aed999142b581c1a31ac9fe74f8f53d,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312867182831939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddac1ac5171f1b7a829330ade44f6eb26e35968d81e3ac1e8d847ce1a830bf43,PodSandboxId:3e841d6cb060740080c3809a6edfcc6e05d7f2fd492df7f3d7c458c8e3846fd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312853457917467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24470295ae8b08abf2c2cddd338fc84002867917ba05b245f7d5db76cdbe7a2b,PodSandboxId:a677b47eed1bc8f540fd330b5e5ef014cee8c729c3638184c0229981883e4036,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312852727484275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71532763a383283fb3ff8da72c4e98d6055499d1f943a9352fc9f61d1e9a0b3,PodSandboxId:03dbbeac5141e3f0412f5541bdb28650f8f9c370880cae855c252e876c0d46b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312839337933192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8251f99cdb8a7317e7b02252ad8fcb5321266b5fa48cd40a428e92ecd1da8f,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705312836818169225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6638d1b848849ca204601706f298a41acb7aa6be3d039e66c94776dcea3d336,PodSandboxId:f39fc258cd2802454125f177f6268f474d6a505606ed46dcdbb6fc2588c5aa6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312836802442590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879e0
28ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d984d25305f6ce4daa0fe1947cc03bd843cfb0f96b420f76c90f034d386cfbdf,PodSandboxId:bbc77e622e1990edcfea581750f82a3fe04705b505003a5bab4088ca5110bc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312830435662914,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e22dc87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d7a52a6b2daed9c929af487567ab26c0e5389e88c79eff9f26453d772b272,PodSandboxId:11aeeb8f84db30ff989cda850a47065d1aaaa3b886f6dc4f54b8d4fecb5c98a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312830307235082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bad3af2943ad663b35cf06c3c091a33ff99f46490e15b1c266e5de1a37a3d5,PodSandboxId:0861a72137440f9c6062b5cd43ef4966b717592381db4e3dd5e760a4b53bc10e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312829955231123,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d411fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164aecf883783ec7423bc48f1a8e40c567eed7cb18bb83b02d9ff8b5450725df,PodSandboxId:20ff152d2444725b110aca60590936e8d5ed9581a105e8ef9b4b356367270f73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312829866803029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd27748d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67b438be-c47c-4758-8dbe-714c5a4ae64c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.004414248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9914cedc-f5b9-4b57-ae9c-ed265c4fb1ac name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.004507239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9914cedc-f5b9-4b57-ae9c-ed265c4fb1ac name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.009779147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e1872caf-dc38-4869-8f00-ddbf8e275d2f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.010211257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705313060010197814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e1872caf-dc38-4869-8f00-ddbf8e275d2f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.011130514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=386aeaed-cbe2-41a9-bc7d-1ed70349ae34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.011211440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=386aeaed-cbe2-41a9-bc7d-1ed70349ae34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.011455691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdbfe911d11bb4f94afd172c8e81ffe5aed999142b581c1a31ac9fe74f8f53d,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312867182831939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddac1ac5171f1b7a829330ade44f6eb26e35968d81e3ac1e8d847ce1a830bf43,PodSandboxId:3e841d6cb060740080c3809a6edfcc6e05d7f2fd492df7f3d7c458c8e3846fd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312853457917467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24470295ae8b08abf2c2cddd338fc84002867917ba05b245f7d5db76cdbe7a2b,PodSandboxId:a677b47eed1bc8f540fd330b5e5ef014cee8c729c3638184c0229981883e4036,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312852727484275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71532763a383283fb3ff8da72c4e98d6055499d1f943a9352fc9f61d1e9a0b3,PodSandboxId:03dbbeac5141e3f0412f5541bdb28650f8f9c370880cae855c252e876c0d46b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312839337933192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8251f99cdb8a7317e7b02252ad8fcb5321266b5fa48cd40a428e92ecd1da8f,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705312836818169225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6638d1b848849ca204601706f298a41acb7aa6be3d039e66c94776dcea3d336,PodSandboxId:f39fc258cd2802454125f177f6268f474d6a505606ed46dcdbb6fc2588c5aa6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312836802442590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879e0
28ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d984d25305f6ce4daa0fe1947cc03bd843cfb0f96b420f76c90f034d386cfbdf,PodSandboxId:bbc77e622e1990edcfea581750f82a3fe04705b505003a5bab4088ca5110bc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312830435662914,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e22dc87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d7a52a6b2daed9c929af487567ab26c0e5389e88c79eff9f26453d772b272,PodSandboxId:11aeeb8f84db30ff989cda850a47065d1aaaa3b886f6dc4f54b8d4fecb5c98a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312830307235082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bad3af2943ad663b35cf06c3c091a33ff99f46490e15b1c266e5de1a37a3d5,PodSandboxId:0861a72137440f9c6062b5cd43ef4966b717592381db4e3dd5e760a4b53bc10e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312829955231123,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d411fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164aecf883783ec7423bc48f1a8e40c567eed7cb18bb83b02d9ff8b5450725df,PodSandboxId:20ff152d2444725b110aca60590936e8d5ed9581a105e8ef9b4b356367270f73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312829866803029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd27748d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=386aeaed-cbe2-41a9-bc7d-1ed70349ae34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.068773747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=90d183d2-c48d-41d7-b8c5-9019b49cfec0 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.068898196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=90d183d2-c48d-41d7-b8c5-9019b49cfec0 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.070787175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=872d6341-e7ea-4548-8b7c-ed4435d3697d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.071304957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705313060071231822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=872d6341-e7ea-4548-8b7c-ed4435d3697d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.071860778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=601bf1db-cf72-4d37-9352-0c7459787795 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.071960102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=601bf1db-cf72-4d37-9352-0c7459787795 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:04:20 multinode-975382 crio[714]: time="2024-01-15 10:04:20.072183242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdbfe911d11bb4f94afd172c8e81ffe5aed999142b581c1a31ac9fe74f8f53d,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705312867182831939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddac1ac5171f1b7a829330ade44f6eb26e35968d81e3ac1e8d847ce1a830bf43,PodSandboxId:3e841d6cb060740080c3809a6edfcc6e05d7f2fd492df7f3d7c458c8e3846fd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705312853457917467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h2lk5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38f4390b-b4e4-467a-87f2-d4d4fc36cd18,},Annotations:map[string]string{io.kubernetes.container.hash: 7de5bf82,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24470295ae8b08abf2c2cddd338fc84002867917ba05b245f7d5db76cdbe7a2b,PodSandboxId:a677b47eed1bc8f540fd330b5e5ef014cee8c729c3638184c0229981883e4036,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705312852727484275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n2sqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f303a63a-c959-477e-89d5-c35bd0802b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c52efb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71532763a383283fb3ff8da72c4e98d6055499d1f943a9352fc9f61d1e9a0b3,PodSandboxId:03dbbeac5141e3f0412f5541bdb28650f8f9c370880cae855c252e876c0d46b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705312839337933192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7tf97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 3b9e470b-af37-44cd-8402-6ec9b3340058,},Annotations:map[string]string{io.kubernetes.container.hash: b907eda5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8251f99cdb8a7317e7b02252ad8fcb5321266b5fa48cd40a428e92ecd1da8f,PodSandboxId:280261ecad49f1c48683d417aa05c705d0bebf9671ec9528914275dd6dc261f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705312836818169225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b8eb636d-31de-4a7e-a296-a66493d5a827,},Annotations:map[string]string{io.kubernetes.container.hash: 5f1cf093,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6638d1b848849ca204601706f298a41acb7aa6be3d039e66c94776dcea3d336,PodSandboxId:f39fc258cd2802454125f177f6268f474d6a505606ed46dcdbb6fc2588c5aa6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705312836802442590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jgsx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779cea9-5532-4d69-9e49-ac2879e0
28ec,},Annotations:map[string]string{io.kubernetes.container.hash: ad693185,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d984d25305f6ce4daa0fe1947cc03bd843cfb0f96b420f76c90f034d386cfbdf,PodSandboxId:bbc77e622e1990edcfea581750f82a3fe04705b505003a5bab4088ca5110bc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705312830435662914,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb63d0e596a024d1a6f045abe90bff6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e22dc87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d7a52a6b2daed9c929af487567ab26c0e5389e88c79eff9f26453d772b272,PodSandboxId:11aeeb8f84db30ff989cda850a47065d1aaaa3b886f6dc4f54b8d4fecb5c98a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705312830307235082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61deabbad0762e4c988c95c1d9d34bc,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bad3af2943ad663b35cf06c3c091a33ff99f46490e15b1c266e5de1a37a3d5,PodSandboxId:0861a72137440f9c6062b5cd43ef4966b717592381db4e3dd5e760a4b53bc10e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705312829955231123,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638704967c86b61fc474d50d411fc862,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:164aecf883783ec7423bc48f1a8e40c567eed7cb18bb83b02d9ff8b5450725df,PodSandboxId:20ff152d2444725b110aca60590936e8d5ed9581a105e8ef9b4b356367270f73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705312829866803029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-975382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6b49eaacd27748d82a7a1330e13424,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=601bf1db-cf72-4d37-9352-0c7459787795 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4fdbfe911d11b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   280261ecad49f       storage-provisioner
	ddac1ac5171f1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   3e841d6cb0607       busybox-5bc68d56bd-h2lk5
	24470295ae8b0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   a677b47eed1bc       coredns-5dd5756b68-n2sqg
	b71532763a383       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   03dbbeac5141e       kindnet-7tf97
	bd8251f99cdb8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   280261ecad49f       storage-provisioner
	f6638d1b84884       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   f39fc258cd280       kube-proxy-jgsx4
	d984d25305f6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   bbc77e622e199       etcd-multinode-975382
	a60d7a52a6b2d       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   11aeeb8f84db3       kube-scheduler-multinode-975382
	f9bad3af2943a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   0861a72137440       kube-apiserver-multinode-975382
	164aecf883783       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   20ff152d24447       kube-controller-manager-multinode-975382
	
	
	==> coredns [24470295ae8b08abf2c2cddd338fc84002867917ba05b245f7d5db76cdbe7a2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49550 - 37816 "HINFO IN 8020001470253810202.4527565654126325143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035491606s
	
	
	==> describe nodes <==
	Name:               multinode-975382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-975382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-975382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_50_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:50:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-975382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:01:06 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:01:06 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:01:06 +0000   Mon, 15 Jan 2024 09:50:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:01:06 +0000   Mon, 15 Jan 2024 10:00:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-975382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa52c9a1c9b14ad8aa1f708bd3b23c5b
	  System UUID:                aa52c9a1-c9b1-4ad8-aa1f-708bd3b23c5b
	  Boot ID:                    6c1c5044-957f-478c-a9d5-24cc62224f22
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h2lk5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-n2sqg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-975382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7tf97                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-975382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-975382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-jgsx4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-975382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-975382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-975382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-975382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-975382 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-975382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-975382 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-975382 event: Registered Node multinode-975382 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-975382 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m52s)  kubelet          Node multinode-975382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m52s)  kubelet          Node multinode-975382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m52s)  kubelet          Node multinode-975382 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-975382 event: Registered Node multinode-975382 in Controller
	
	
	Name:               multinode-975382-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-975382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-975382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T10_04_15_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:02:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-975382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:04:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:02:33 +0000   Mon, 15 Jan 2024 10:02:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:02:33 +0000   Mon, 15 Jan 2024 10:02:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:02:33 +0000   Mon, 15 Jan 2024 10:02:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:02:33 +0000   Mon, 15 Jan 2024 10:02:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    multinode-975382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ad3b9e7541a43f5bb4662152fcf04c7
	  System UUID:                4ad3b9e7-541a-43f5-bb46-62152fcf04c7
	  Boot ID:                    218cbe2d-977f-4264-b8ee-4b4a0d915cea
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-g8s82    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-pd2q7               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-znv78            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 105s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-975382-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-975382-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m54s                  kubelet          Node multinode-975382-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m13s (x2 over 3m13s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 108s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet          Node multinode-975382-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet          Node multinode-975382-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                107s                   kubelet          Node multinode-975382-m02 status is now: NodeReady
	  Normal   RegisteredNode           103s                   node-controller  Node multinode-975382-m02 event: Registered Node multinode-975382-m02 in Controller
	
	
	Name:               multinode-975382-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-975382-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-975382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T10_04_15_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:04:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-975382-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:04:15 +0000   Mon, 15 Jan 2024 10:04:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:04:15 +0000   Mon, 15 Jan 2024 10:04:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:04:15 +0000   Mon, 15 Jan 2024 10:04:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:04:15 +0000   Mon, 15 Jan 2024 10:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    multinode-975382-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8865cf2193f14195b291c76bd8783f47
	  System UUID:                8865cf21-93f1-4195-b291-c76bd8783f47
	  Boot ID:                    9eed1e14-0d85-48ad-aa0c-bb0d5f52a18b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bsnlw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kindnet-q2p7k               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-fxwtq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-975382-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-975382-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-975382-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-975382-m03 status is now: NodeReady
	  Normal   NodeNotReady             72s                 kubelet     Node multinode-975382-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        41s (x2 over 101s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 6s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-975382-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-975382-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-975382-m03 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[Jan15 09:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.342613] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan15 10:00] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148857] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.607049] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.381353] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.110720] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.131405] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.097908] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.205648] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +17.089660] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +19.118394] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [d984d25305f6ce4daa0fe1947cc03bd843cfb0f96b420f76c90f034d386cfbdf] <==
	{"level":"info","ts":"2024-01-15T10:00:32.223863Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-01-15T10:00:32.22344Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-15T10:00:32.223585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141)"}
	{"level":"info","ts":"2024-01-15T10:00:32.224235Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"a09c9983ac28f1fd","added-peer-peer-urls":["https://192.168.39.217:2380"]}
	{"level":"info","ts":"2024-01-15T10:00:32.224599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:00:32.224721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:00:32.225458Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T10:00:32.225698Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T10:00:32.223658Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T10:00:32.227376Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T10:00:32.227604Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T10:00:33.706923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-15T10:00:33.707008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-15T10:00:33.707053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2024-01-15T10:00:33.707072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 3"}
	{"level":"info","ts":"2024-01-15T10:00:33.707079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-01-15T10:00:33.70709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 3"}
	{"level":"info","ts":"2024-01-15T10:00:33.707099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-01-15T10:00:33.708639Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:multinode-975382 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T10:00:33.708807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:00:33.708914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:00:33.710473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
	{"level":"info","ts":"2024-01-15T10:00:33.710477Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T10:00:33.710695Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T10:00:33.710739Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:04:20 up 4 min,  0 users,  load average: 0.29, 0.25, 0.11
	Linux multinode-975382 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [b71532763a383283fb3ff8da72c4e98d6055499d1f943a9352fc9f61d1e9a0b3] <==
	I0115 10:03:31.084092       1 main.go:250] Node multinode-975382-m03 has CIDR [10.244.3.0/24] 
	I0115 10:03:41.096907       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 10:03:41.097136       1 main.go:227] handling current node
	I0115 10:03:41.097235       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 10:03:41.097381       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	I0115 10:03:41.097557       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0115 10:03:41.097617       1 main.go:250] Node multinode-975382-m03 has CIDR [10.244.3.0/24] 
	I0115 10:03:51.111300       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 10:03:51.111344       1 main.go:227] handling current node
	I0115 10:03:51.111355       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 10:03:51.111361       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	I0115 10:03:51.111460       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0115 10:03:51.111490       1 main.go:250] Node multinode-975382-m03 has CIDR [10.244.3.0/24] 
	I0115 10:04:01.124577       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 10:04:01.124643       1 main.go:227] handling current node
	I0115 10:04:01.124659       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 10:04:01.124668       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	I0115 10:04:01.124811       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0115 10:04:01.124858       1 main.go:250] Node multinode-975382-m03 has CIDR [10.244.3.0/24] 
	I0115 10:04:11.138588       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0115 10:04:11.138649       1 main.go:227] handling current node
	I0115 10:04:11.138661       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0115 10:04:11.138667       1 main.go:250] Node multinode-975382-m02 has CIDR [10.244.1.0/24] 
	I0115 10:04:11.138808       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0115 10:04:11.138839       1 main.go:250] Node multinode-975382-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f9bad3af2943ad663b35cf06c3c091a33ff99f46490e15b1c266e5de1a37a3d5] <==
	I0115 10:00:35.151370       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0115 10:00:35.151386       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0115 10:00:35.151401       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0115 10:00:35.172441       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0115 10:00:35.172536       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0115 10:00:35.172820       1 aggregator.go:166] initial CRD sync complete...
	I0115 10:00:35.172859       1 autoregister_controller.go:141] Starting autoregister controller
	I0115 10:00:35.172865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0115 10:00:35.235858       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 10:00:35.242347       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0115 10:00:35.272016       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 10:00:35.272816       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0115 10:00:35.273637       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0115 10:00:35.272918       1 cache.go:39] Caches are synced for autoregister controller
	I0115 10:00:35.272930       1 shared_informer.go:318] Caches are synced for configmaps
	I0115 10:00:35.273610       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0115 10:00:35.286877       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0115 10:00:36.083689       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0115 10:00:38.047606       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0115 10:00:38.176748       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0115 10:00:38.196413       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0115 10:00:38.281216       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 10:00:38.289070       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0115 10:00:47.577919       1 controller.go:624] quota admission added evaluator for: endpoints
	I0115 10:00:47.693753       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [164aecf883783ec7423bc48f1a8e40c567eed7cb18bb83b02d9ff8b5450725df] <==
	I0115 10:02:32.958450       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m03"
	I0115 10:02:32.958664       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-pwx96" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-pwx96"
	I0115 10:02:32.958732       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-975382-m02\" does not exist"
	I0115 10:02:32.973560       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-975382-m02" podCIDRs=["10.244.1.0/24"]
	I0115 10:02:33.095037       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 10:02:33.856416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="245.066µs"
	I0115 10:02:37.685896       1 event.go:307] "Event occurred" object="multinode-975382-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-975382-m02 event: Registered Node multinode-975382-m02 in Controller"
	I0115 10:02:47.244780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="105.181µs"
	I0115 10:02:47.721133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.007µs"
	I0115 10:02:47.725387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="144.269µs"
	I0115 10:03:08.879790       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 10:04:11.378062       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-g8s82"
	I0115 10:04:11.390757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.135653ms"
	I0115 10:04:11.406079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.243177ms"
	I0115 10:04:11.406778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.156µs"
	I0115 10:04:11.419819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.768µs"
	I0115 10:04:12.980793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.806639ms"
	I0115 10:04:12.981182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.772µs"
	I0115 10:04:14.385609       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 10:04:15.053046       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-975382-m03\" does not exist"
	I0115 10:04:15.056751       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 10:04:15.057153       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-bsnlw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-bsnlw"
	I0115 10:04:15.072541       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-975382-m03" podCIDRs=["10.244.2.0/24"]
	I0115 10:04:15.197945       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-975382-m02"
	I0115 10:04:15.937855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.504µs"
	
	
	==> kube-proxy [f6638d1b848849ca204601706f298a41acb7aa6be3d039e66c94776dcea3d336] <==
	I0115 10:00:37.167042       1 server_others.go:69] "Using iptables proxy"
	I0115 10:00:37.216874       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0115 10:00:37.500331       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 10:00:37.500408       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:00:37.503184       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:00:37.503379       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:00:37.503553       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:00:37.503676       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:00:37.504477       1 config.go:188] "Starting service config controller"
	I0115 10:00:37.504598       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:00:37.504646       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:00:37.504663       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:00:37.506164       1 config.go:315] "Starting node config controller"
	I0115 10:00:37.506935       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:00:37.606948       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:00:37.607028       1 shared_informer.go:318] Caches are synced for node config
	I0115 10:00:37.607053       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a60d7a52a6b2daed9c929af487567ab26c0e5389e88c79eff9f26453d772b272] <==
	I0115 10:00:32.494175       1 serving.go:348] Generated self-signed cert in-memory
	W0115 10:00:35.164588       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:00:35.164640       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:00:35.164655       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:00:35.164663       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:00:35.188833       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0115 10:00:35.188977       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:00:35.191051       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:00:35.191101       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:00:35.192223       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:00:35.193582       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:00:35.292131       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:00:03 UTC, ends at Mon 2024-01-15 10:04:20 UTC. --
	Jan 15 10:00:39 multinode-975382 kubelet[919]: E0115 10:00:39.706619     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38f4390b-b4e4-467a-87f2-d4d4fc36cd18-kube-api-access-9jkn9 podName:38f4390b-b4e4-467a-87f2-d4d4fc36cd18 nodeName:}" failed. No retries permitted until 2024-01-15 10:00:43.706602222 +0000 UTC m=+15.022668266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9jkn9" (UniqueName: "kubernetes.io/projected/38f4390b-b4e4-467a-87f2-d4d4fc36cd18-kube-api-access-9jkn9") pod "busybox-5bc68d56bd-h2lk5" (UID: "38f4390b-b4e4-467a-87f2-d4d4fc36cd18") : object "default"/"kube-root-ca.crt" not registered
	Jan 15 10:00:39 multinode-975382 kubelet[919]: E0115 10:00:39.937903     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-n2sqg" podUID="f303a63a-c959-477e-89d5-c35bd0802b1b"
	Jan 15 10:00:39 multinode-975382 kubelet[919]: E0115 10:00:39.938455     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-h2lk5" podUID="38f4390b-b4e4-467a-87f2-d4d4fc36cd18"
	Jan 15 10:00:41 multinode-975382 kubelet[919]: E0115 10:00:41.938174     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-n2sqg" podUID="f303a63a-c959-477e-89d5-c35bd0802b1b"
	Jan 15 10:00:41 multinode-975382 kubelet[919]: E0115 10:00:41.938395     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-h2lk5" podUID="38f4390b-b4e4-467a-87f2-d4d4fc36cd18"
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.635081     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.635202     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f303a63a-c959-477e-89d5-c35bd0802b1b-config-volume podName:f303a63a-c959-477e-89d5-c35bd0802b1b nodeName:}" failed. No retries permitted until 2024-01-15 10:00:51.635184792 +0000 UTC m=+22.951250836 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f303a63a-c959-477e-89d5-c35bd0802b1b-config-volume") pod "coredns-5dd5756b68-n2sqg" (UID: "f303a63a-c959-477e-89d5-c35bd0802b1b") : object "kube-system"/"coredns" not registered
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.735618     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.735690     919 projected.go:198] Error preparing data for projected volume kube-api-access-9jkn9 for pod default/busybox-5bc68d56bd-h2lk5: object "default"/"kube-root-ca.crt" not registered
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.735745     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38f4390b-b4e4-467a-87f2-d4d4fc36cd18-kube-api-access-9jkn9 podName:38f4390b-b4e4-467a-87f2-d4d4fc36cd18 nodeName:}" failed. No retries permitted until 2024-01-15 10:00:51.735730515 +0000 UTC m=+23.051796560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9jkn9" (UniqueName: "kubernetes.io/projected/38f4390b-b4e4-467a-87f2-d4d4fc36cd18-kube-api-access-9jkn9") pod "busybox-5bc68d56bd-h2lk5" (UID: "38f4390b-b4e4-467a-87f2-d4d4fc36cd18") : object "default"/"kube-root-ca.crt" not registered
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.938721     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-n2sqg" podUID="f303a63a-c959-477e-89d5-c35bd0802b1b"
	Jan 15 10:00:43 multinode-975382 kubelet[919]: E0115 10:00:43.938766     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-h2lk5" podUID="38f4390b-b4e4-467a-87f2-d4d4fc36cd18"
	Jan 15 10:01:07 multinode-975382 kubelet[919]: I0115 10:01:07.150825     919 scope.go:117] "RemoveContainer" containerID="bd8251f99cdb8a7317e7b02252ad8fcb5321266b5fa48cd40a428e92ecd1da8f"
	Jan 15 10:01:28 multinode-975382 kubelet[919]: E0115 10:01:28.962037     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:01:28 multinode-975382 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:01:28 multinode-975382 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:01:28 multinode-975382 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:02:28 multinode-975382 kubelet[919]: E0115 10:02:28.972576     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:02:28 multinode-975382 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:02:28 multinode-975382 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:02:28 multinode-975382 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:03:28 multinode-975382 kubelet[919]: E0115 10:03:28.962149     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:03:28 multinode-975382 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:03:28 multinode-975382 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:03:28 multinode-975382 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-975382 -n multinode-975382
E0115 10:04:21.452895   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-975382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (689.96s)
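
The kubelet entries above show two recurring conditions while the restarted node comes back: "object ... not registered" volume mounts that are retried with a doubling durationBeforeRetry (4s, then 8s), and the periodic ip6tables canary error because the guest has no ip6tables nat table. A minimal Go sketch of that capped exponential-backoff retry pattern, assuming an illustrative mount() helper that is not a kubelet API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mount stands in for the real volume setup; it always fails here so the
// growing retry delay is visible. It is not part of the kubelet.
func mount() error {
	return errors.New(`object "default"/"kube-root-ca.crt" not registered`)
}

func main() {
	delay := 4 * time.Second // doubles after each failure, as in the log (4s, then 8s)
	const maxDelay = 32 * time.Second

	for attempt := 1; attempt <= 4; attempt++ {
		if err := mount(); err == nil {
			fmt.Println("mounted")
			return
		}
		fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	fmt.Println("giving up")
}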

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-975382 stop: exit status 82 (2m1.384256193s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-975382"  ...
	* Stopping node "multinode-975382"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-975382 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status
E0115 10:06:39.520062   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-975382 status: exit status 3 (18.814435937s)

                                                
                                                
-- stdout --
	multinode-975382
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-975382-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:06:43.454736   31986 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	E0115 10:06:43.454769   31986 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-975382 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-975382 -n multinode-975382
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-975382 -n multinode-975382: exit status 3 (3.174522071s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:06:46.782795   32097 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	E0115 10:06:46.782813   32097 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-975382" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.37s)
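
After the failed stop, `minikube status` can no longer reach the primary node over SSH ("dial tcp 192.168.39.217:22: connect: no route to host") and reports host: Error, which the test surfaces as exit status 3. A minimal sketch of that kind of reachability probe, assuming a plain TCP dial to port 22 is an acceptable stand-in for the real SSH session:

package main

import (
	"fmt"
	"net"
	"time"
)

// hostState reports "Running" when the guest's SSH port accepts a TCP
// connection within the timeout and "Error" otherwise, e.g. when the dial
// fails with "no route to host".
func hostState(addr string, timeout time.Duration) string {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		fmt.Println("status error:", err)
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	// Guest address taken from the log above; purely illustrative.
	fmt.Println(hostState("192.168.39.217:22", 3*time.Second))
}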

                                                
                                    
x
+
TestPreload (271.85s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-598240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0115 10:16:39.519362   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-598240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.446644245s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-598240 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-598240
E0115 10:17:15.934545   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:19:12.883403   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-598240: exit status 82 (2m1.703989087s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-598240"  ...
	* Stopping node "test-preload-598240"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-598240 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-15 10:19:14.595724398 +0000 UTC m=+3166.566891810
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-598240 -n test-preload-598240
E0115 10:19:21.454204   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-598240 -n test-preload-598240: exit status 3 (18.655261868s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:19:33.246740   35480 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	E0115 10:19:33.246761   35480 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-598240" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-598240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-598240
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-598240: (1.120448489s)
--- FAIL: TestPreload (271.85s)
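
TestPreload dies at the same stop step: `minikube stop` exits with status 82 and the test aborts before it can exercise the preload path. For reference, reading a non-zero exit code from an external command in Go looks roughly like this (the binary path is copied from the log, but any command works):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Command copied from the failing step above; purely illustrative here.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "test-preload-598240")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ExitCode() would be 82 for the GUEST_STOP_TIMEOUT case above.
		fmt.Printf("stop failed: exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Println("stopped cleanly")
}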

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-824502 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-824502 --alsologtostderr -v=3: exit status 82 (2m1.651725502s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-824502"  ...
	* Stopping node "no-preload-824502"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 10:30:17.023789   45090 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:30:17.023937   45090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:17.023944   45090 out.go:309] Setting ErrFile to fd 2...
	I0115 10:30:17.023951   45090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:17.024180   45090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:30:17.024399   45090 out.go:303] Setting JSON to false
	I0115 10:30:17.024477   45090 mustload.go:65] Loading cluster: no-preload-824502
	I0115 10:30:17.024862   45090 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:30:17.024958   45090 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:30:17.025845   45090 mustload.go:65] Loading cluster: no-preload-824502
	I0115 10:30:17.025997   45090 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:30:17.026044   45090 stop.go:39] StopHost: no-preload-824502
	I0115 10:30:17.026657   45090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:30:17.026721   45090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:30:17.043747   45090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0115 10:30:17.044458   45090 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:30:17.045116   45090 main.go:141] libmachine: Using API Version  1
	I0115 10:30:17.045151   45090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:30:17.045668   45090 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:30:17.047728   45090 out.go:177] * Stopping node "no-preload-824502"  ...
	I0115 10:30:17.049276   45090 main.go:141] libmachine: Stopping "no-preload-824502"...
	I0115 10:30:17.049308   45090 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:30:17.051456   45090 main.go:141] libmachine: (no-preload-824502) Calling .Stop
	I0115 10:30:17.055277   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 0/60
	I0115 10:30:18.056543   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 1/60
	I0115 10:30:19.058026   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 2/60
	I0115 10:30:20.059414   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 3/60
	I0115 10:30:21.060462   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 4/60
	I0115 10:30:22.062583   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 5/60
	I0115 10:30:23.065060   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 6/60
	I0115 10:30:24.066449   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 7/60
	I0115 10:30:25.067928   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 8/60
	I0115 10:30:26.069451   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 9/60
	I0115 10:30:27.071589   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 10/60
	I0115 10:30:28.073002   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 11/60
	I0115 10:30:29.074770   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 12/60
	I0115 10:30:30.076109   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 13/60
	I0115 10:30:31.078252   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 14/60
	I0115 10:30:32.080144   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 15/60
	I0115 10:30:33.081844   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 16/60
	I0115 10:30:34.084187   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 17/60
	I0115 10:30:35.085791   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 18/60
	I0115 10:30:36.087087   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 19/60
	I0115 10:30:37.088514   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 20/60
	I0115 10:30:38.089909   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 21/60
	I0115 10:30:39.157642   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 22/60
	I0115 10:30:40.158899   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 23/60
	I0115 10:30:41.160962   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 24/60
	I0115 10:30:42.163007   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 25/60
	I0115 10:30:43.164467   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 26/60
	I0115 10:30:44.165904   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 27/60
	I0115 10:30:45.167290   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 28/60
	I0115 10:30:46.168894   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 29/60
	I0115 10:30:47.171031   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 30/60
	I0115 10:30:48.173012   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 31/60
	I0115 10:30:49.174342   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 32/60
	I0115 10:30:50.176331   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 33/60
	I0115 10:30:51.177826   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 34/60
	I0115 10:30:52.179868   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 35/60
	I0115 10:30:53.181333   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 36/60
	I0115 10:30:54.182486   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 37/60
	I0115 10:30:55.184451   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 38/60
	I0115 10:30:56.185621   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 39/60
	I0115 10:30:57.187618   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 40/60
	I0115 10:30:58.188834   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 41/60
	I0115 10:30:59.190157   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 42/60
	I0115 10:31:00.191583   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 43/60
	I0115 10:31:01.192969   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 44/60
	I0115 10:31:02.194874   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 45/60
	I0115 10:31:03.196797   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 46/60
	I0115 10:31:04.198136   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 47/60
	I0115 10:31:05.199620   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 48/60
	I0115 10:31:06.202086   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 49/60
	I0115 10:31:07.204360   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 50/60
	I0115 10:31:08.205606   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 51/60
	I0115 10:31:09.207143   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 52/60
	I0115 10:31:10.208469   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 53/60
	I0115 10:31:11.209755   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 54/60
	I0115 10:31:12.211856   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 55/60
	I0115 10:31:13.213698   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 56/60
	I0115 10:31:14.214921   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 57/60
	I0115 10:31:15.216343   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 58/60
	I0115 10:31:16.217783   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 59/60
	I0115 10:31:17.219160   45090 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:31:17.219205   45090 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:17.219226   45090 retry.go:31] will retry after 1.262092549s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:18.482608   45090 stop.go:39] StopHost: no-preload-824502
	I0115 10:31:18.483112   45090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:31:18.483168   45090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:31:18.497625   45090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0115 10:31:18.498118   45090 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:31:18.498790   45090 main.go:141] libmachine: Using API Version  1
	I0115 10:31:18.498817   45090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:31:18.499253   45090 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:31:18.501552   45090 out.go:177] * Stopping node "no-preload-824502"  ...
	I0115 10:31:18.503019   45090 main.go:141] libmachine: Stopping "no-preload-824502"...
	I0115 10:31:18.503034   45090 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:31:18.504692   45090 main.go:141] libmachine: (no-preload-824502) Calling .Stop
	I0115 10:31:18.508173   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 0/60
	I0115 10:31:19.509492   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 1/60
	I0115 10:31:20.510991   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 2/60
	I0115 10:31:21.512322   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 3/60
	I0115 10:31:22.513818   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 4/60
	I0115 10:31:23.515823   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 5/60
	I0115 10:31:24.517353   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 6/60
	I0115 10:31:25.518839   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 7/60
	I0115 10:31:26.520273   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 8/60
	I0115 10:31:27.521560   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 9/60
	I0115 10:31:28.523238   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 10/60
	I0115 10:31:29.524883   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 11/60
	I0115 10:31:30.526267   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 12/60
	I0115 10:31:31.527514   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 13/60
	I0115 10:31:32.529054   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 14/60
	I0115 10:31:33.531100   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 15/60
	I0115 10:31:34.532787   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 16/60
	I0115 10:31:35.533873   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 17/60
	I0115 10:31:36.535153   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 18/60
	I0115 10:31:37.537301   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 19/60
	I0115 10:31:38.539037   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 20/60
	I0115 10:31:39.541143   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 21/60
	I0115 10:31:40.542288   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 22/60
	I0115 10:31:41.543774   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 23/60
	I0115 10:31:42.545270   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 24/60
	I0115 10:31:43.547009   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 25/60
	I0115 10:31:44.548723   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 26/60
	I0115 10:31:45.550156   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 27/60
	I0115 10:31:46.551568   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 28/60
	I0115 10:31:47.552913   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 29/60
	I0115 10:31:48.554930   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 30/60
	I0115 10:31:49.556301   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 31/60
	I0115 10:31:50.557861   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 32/60
	I0115 10:31:51.559152   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 33/60
	I0115 10:31:52.560735   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 34/60
	I0115 10:31:53.562587   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 35/60
	I0115 10:31:54.563966   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 36/60
	I0115 10:31:55.565405   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 37/60
	I0115 10:31:56.566811   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 38/60
	I0115 10:31:57.568130   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 39/60
	I0115 10:31:58.569739   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 40/60
	I0115 10:31:59.571000   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 41/60
	I0115 10:32:00.572361   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 42/60
	I0115 10:32:01.573832   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 43/60
	I0115 10:32:02.575263   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 44/60
	I0115 10:32:03.576923   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 45/60
	I0115 10:32:04.578127   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 46/60
	I0115 10:32:05.580103   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 47/60
	I0115 10:32:06.581290   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 48/60
	I0115 10:32:07.582783   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 49/60
	I0115 10:32:08.584457   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 50/60
	I0115 10:32:09.585799   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 51/60
	I0115 10:32:10.587037   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 52/60
	I0115 10:32:11.588735   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 53/60
	I0115 10:32:12.591139   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 54/60
	I0115 10:32:13.592609   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 55/60
	I0115 10:32:14.593973   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 56/60
	I0115 10:32:15.595415   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 57/60
	I0115 10:32:16.596672   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 58/60
	I0115 10:32:17.598034   45090 main.go:141] libmachine: (no-preload-824502) Waiting for machine to stop 59/60
	I0115 10:32:18.598377   45090 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:32:18.598432   45090 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:32:18.600598   45090 out.go:177] 
	W0115 10:32:18.602077   45090 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0115 10:32:18.602093   45090 out.go:239] * 
	* 
	W0115 10:32:18.604576   45090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 10:32:18.606066   45090 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-824502 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502: exit status 3 (18.511027453s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:37.118668   46039 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host
	E0115 10:32:37.118686   46039 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-824502" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.16s)
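
The stderr above spells out what exit status 82 means here: libmachine calls .Stop on the domain, polls .GetState once per second for 60 attempts, retries the whole sequence once, and finally gives up with GUEST_STOP_TIMEOUT while the state is still "Running". A minimal Go sketch of that polling loop, with a hypothetical vm interface standing in for the kvm2 driver plugin:

package main

import (
	"fmt"
	"time"
)

// vm models only the two driver calls visible in the log: .Stop and .GetState.
type vm interface {
	Stop() error
	GetState() (string, error)
}

// waitForStop mirrors the "Waiting for machine to stop i/60" messages: ask the
// driver to stop, then poll once per second until the state leaves "Running".
func waitForStop(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := m.GetState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}

// stuckVM never leaves "Running", reproducing the failure above in miniature.
type stuckVM struct{}

func (stuckVM) Stop() error               { return nil }
func (stuckVM) GetState() (string, error) { return "Running", nil }

func main() {
	if err := waitForStop(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}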

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-206509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-206509 --alsologtostderr -v=3: exit status 82 (2m1.135859875s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-206509"  ...
	* Stopping node "old-k8s-version-206509"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 10:30:17.404568   45116 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:30:17.404854   45116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:17.404868   45116 out.go:309] Setting ErrFile to fd 2...
	I0115 10:30:17.404878   45116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:17.405464   45116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:30:17.405918   45116 out.go:303] Setting JSON to false
	I0115 10:30:17.406090   45116 mustload.go:65] Loading cluster: old-k8s-version-206509
	I0115 10:30:17.406585   45116 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:30:17.406692   45116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:30:17.406913   45116 mustload.go:65] Loading cluster: old-k8s-version-206509
	I0115 10:30:17.407069   45116 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:30:17.407138   45116 stop.go:39] StopHost: old-k8s-version-206509
	I0115 10:30:17.407724   45116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:30:17.407810   45116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:30:17.424328   45116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0115 10:30:17.424777   45116 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:30:17.425467   45116 main.go:141] libmachine: Using API Version  1
	I0115 10:30:17.425490   45116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:30:17.426008   45116 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:30:17.428630   45116 out.go:177] * Stopping node "old-k8s-version-206509"  ...
	I0115 10:30:17.431019   45116 main.go:141] libmachine: Stopping "old-k8s-version-206509"...
	I0115 10:30:17.431046   45116 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:30:17.433496   45116 main.go:141] libmachine: (old-k8s-version-206509) Calling .Stop
	I0115 10:30:17.437331   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 0/60
	I0115 10:30:18.439083   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 1/60
	I0115 10:30:19.441260   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 2/60
	I0115 10:30:20.442941   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 3/60
	I0115 10:30:21.444577   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 4/60
	I0115 10:30:22.446649   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 5/60
	I0115 10:30:23.448807   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 6/60
	I0115 10:30:24.450280   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 7/60
	I0115 10:30:25.451703   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 8/60
	I0115 10:30:26.453180   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 9/60
	I0115 10:30:27.455431   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 10/60
	I0115 10:30:28.457147   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 11/60
	I0115 10:30:29.458496   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 12/60
	I0115 10:30:30.460127   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 13/60
	I0115 10:30:31.461673   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 14/60
	I0115 10:30:32.463685   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 15/60
	I0115 10:30:33.465079   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 16/60
	I0115 10:30:34.466579   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 17/60
	I0115 10:30:35.467864   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 18/60
	I0115 10:30:36.469673   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 19/60
	I0115 10:30:37.472101   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 20/60
	I0115 10:30:38.473650   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 21/60
	I0115 10:30:39.475558   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 22/60
	I0115 10:30:40.476924   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 23/60
	I0115 10:30:41.478395   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 24/60
	I0115 10:30:42.480204   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 25/60
	I0115 10:30:43.481834   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 26/60
	I0115 10:30:44.483423   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 27/60
	I0115 10:30:45.485013   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 28/60
	I0115 10:30:46.486720   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 29/60
	I0115 10:30:47.488869   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 30/60
	I0115 10:30:48.490367   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 31/60
	I0115 10:30:49.491822   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 32/60
	I0115 10:30:50.493264   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 33/60
	I0115 10:30:51.495301   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 34/60
	I0115 10:30:52.497173   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 35/60
	I0115 10:30:53.498903   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 36/60
	I0115 10:30:54.500174   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 37/60
	I0115 10:30:55.501662   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 38/60
	I0115 10:30:56.502902   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 39/60
	I0115 10:30:57.505091   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 40/60
	I0115 10:30:58.506381   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 41/60
	I0115 10:30:59.507645   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 42/60
	I0115 10:31:00.509058   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 43/60
	I0115 10:31:01.510275   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 44/60
	I0115 10:31:02.512354   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 45/60
	I0115 10:31:03.513698   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 46/60
	I0115 10:31:04.514944   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 47/60
	I0115 10:31:05.516728   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 48/60
	I0115 10:31:06.518407   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 49/60
	I0115 10:31:07.520659   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 50/60
	I0115 10:31:08.522150   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 51/60
	I0115 10:31:09.523626   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 52/60
	I0115 10:31:10.525300   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 53/60
	I0115 10:31:11.526353   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 54/60
	I0115 10:31:12.528164   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 55/60
	I0115 10:31:13.530117   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 56/60
	I0115 10:31:14.531362   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 57/60
	I0115 10:31:15.533042   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 58/60
	I0115 10:31:16.534496   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 59/60
	I0115 10:31:17.535836   45116 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:31:17.535895   45116 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:17.535916   45116 retry.go:31] will retry after 790.785467ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:18.327557   45116 stop.go:39] StopHost: old-k8s-version-206509
	I0115 10:31:18.327913   45116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:31:18.327953   45116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:31:18.342911   45116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0115 10:31:18.343316   45116 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:31:18.343804   45116 main.go:141] libmachine: Using API Version  1
	I0115 10:31:18.343828   45116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:31:18.344235   45116 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:31:18.346218   45116 out.go:177] * Stopping node "old-k8s-version-206509"  ...
	I0115 10:31:18.347745   45116 main.go:141] libmachine: Stopping "old-k8s-version-206509"...
	I0115 10:31:18.347764   45116 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:31:18.349273   45116 main.go:141] libmachine: (old-k8s-version-206509) Calling .Stop
	I0115 10:31:18.352607   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 0/60
	I0115 10:31:19.354068   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 1/60
	I0115 10:31:20.355385   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 2/60
	I0115 10:31:21.356868   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 3/60
	I0115 10:31:22.358646   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 4/60
	I0115 10:31:23.360579   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 5/60
	I0115 10:31:24.362134   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 6/60
	I0115 10:31:25.363658   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 7/60
	I0115 10:31:26.365106   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 8/60
	I0115 10:31:27.366564   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 9/60
	I0115 10:31:28.368816   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 10/60
	I0115 10:31:29.370088   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 11/60
	I0115 10:31:30.371500   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 12/60
	I0115 10:31:31.373539   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 13/60
	I0115 10:31:32.374936   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 14/60
	I0115 10:31:33.377024   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 15/60
	I0115 10:31:34.378292   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 16/60
	I0115 10:31:35.379683   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 17/60
	I0115 10:31:36.381118   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 18/60
	I0115 10:31:37.382626   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 19/60
	I0115 10:31:38.385168   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 20/60
	I0115 10:31:39.386576   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 21/60
	I0115 10:31:40.387904   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 22/60
	I0115 10:31:41.389556   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 23/60
	I0115 10:31:42.390950   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 24/60
	I0115 10:31:43.393144   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 25/60
	I0115 10:31:44.394582   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 26/60
	I0115 10:31:45.396072   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 27/60
	I0115 10:31:46.397333   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 28/60
	I0115 10:31:47.398778   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 29/60
	I0115 10:31:48.400347   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 30/60
	I0115 10:31:49.401794   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 31/60
	I0115 10:31:50.403359   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 32/60
	I0115 10:31:51.405366   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 33/60
	I0115 10:31:52.406873   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 34/60
	I0115 10:31:53.409069   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 35/60
	I0115 10:31:54.411062   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 36/60
	I0115 10:31:55.412182   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 37/60
	I0115 10:31:56.413936   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 38/60
	I0115 10:31:57.415140   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 39/60
	I0115 10:31:58.416838   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 40/60
	I0115 10:31:59.418610   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 41/60
	I0115 10:32:00.420779   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 42/60
	I0115 10:32:01.422237   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 43/60
	I0115 10:32:02.423954   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 44/60
	I0115 10:32:03.426239   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 45/60
	I0115 10:32:04.427547   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 46/60
	I0115 10:32:05.428771   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 47/60
	I0115 10:32:06.430909   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 48/60
	I0115 10:32:07.432268   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 49/60
	I0115 10:32:08.434483   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 50/60
	I0115 10:32:09.435929   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 51/60
	I0115 10:32:10.437378   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 52/60
	I0115 10:32:11.438669   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 53/60
	I0115 10:32:12.440934   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 54/60
	I0115 10:32:13.443114   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 55/60
	I0115 10:32:14.445028   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 56/60
	I0115 10:32:15.446429   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 57/60
	I0115 10:32:16.447776   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 58/60
	I0115 10:32:17.449049   45116 main.go:141] libmachine: (old-k8s-version-206509) Waiting for machine to stop 59/60
	I0115 10:32:18.450050   45116 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:32:18.450091   45116 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:32:18.454276   45116 out.go:177] 
	W0115 10:32:18.455903   45116 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0115 10:32:18.455926   45116 out.go:239] * 
	* 
	W0115 10:32:18.458663   45116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 10:32:18.460015   45116 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-206509 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509: exit status 3 (18.657473157s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:37.118672   45989 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host
	E0115 10:32:37.118688   45989 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-206509" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.79s)
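The repeated "Waiting for machine to stop N/60" lines above make the shape of this failure visible: the KVM driver requests a graceful stop, polls the VM state roughly once per second for 60 attempts, retries once after ~800ms (the retry.go line), and finally exits with GUEST_STOP_TIMEOUT because the guest never leaves the "Running" state. Below is a minimal, self-contained Go sketch of that wait-and-retry pattern only; the machine interface, stuckVM type, and every other name are hypothetical illustrations, not minikube's or libmachine's actual APIs.

	// Hypothetical sketch of the stop/poll/retry pattern seen in the log above.
	package main

	import (
		"fmt"
		"time"
	)

	type machine interface {
		Stop() error            // request a graceful (ACPI) shutdown
		State() (string, error) // e.g. "Running", "Stopped"
	}

	// stopAndWait requests a stop, then polls once per second for up to
	// maxWait attempts, mirroring the 0/60..59/60 counter in the log.
	func stopAndWait(m machine, maxWait int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxWait; i++ {
			st, err := m.State()
			if err != nil {
				return err
			}
			if st == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
			time.Sleep(time.Second)
		}
		st, _ := m.State()
		return fmt.Errorf("unable to stop vm, current state %q", st)
	}

	// stuckVM simulates a guest that never finishes shutting down, which is
	// the behaviour the failing Stop tests in this run are hitting.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		vm := stuckVM{}
		if err := stopAndWait(vm, 60); err != nil {
			// One short retry before giving up, as in the log's retry.go line.
			// Against the stuck guest this takes about two minutes, matching
			// the ~2m1s reported for the failed stop commands.
			time.Sleep(800 * time.Millisecond)
			if err := stopAndWait(vm, 60); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}

Against a guest that does shut down, stopAndWait returns as soon as State reports "Stopped"; against the stuck guest simulated here it reproduces the full wait that drives these ~139-second test durations.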

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-781270 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-781270 --alsologtostderr -v=3: exit status 82 (2m1.119136257s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-781270"  ...
	* Stopping node "embed-certs-781270"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 10:30:34.701841   45272 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:30:34.701997   45272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:34.702008   45272 out.go:309] Setting ErrFile to fd 2...
	I0115 10:30:34.702016   45272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:30:34.702309   45272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:30:34.702646   45272 out.go:303] Setting JSON to false
	I0115 10:30:34.702738   45272 mustload.go:65] Loading cluster: embed-certs-781270
	I0115 10:30:34.703244   45272 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:30:34.703337   45272 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:30:34.703570   45272 mustload.go:65] Loading cluster: embed-certs-781270
	I0115 10:30:34.703739   45272 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:30:34.703790   45272 stop.go:39] StopHost: embed-certs-781270
	I0115 10:30:34.704345   45272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:30:34.704408   45272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:30:34.719009   45272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I0115 10:30:34.719465   45272 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:30:34.720092   45272 main.go:141] libmachine: Using API Version  1
	I0115 10:30:34.720117   45272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:30:34.720559   45272 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:30:34.722977   45272 out.go:177] * Stopping node "embed-certs-781270"  ...
	I0115 10:30:34.724665   45272 main.go:141] libmachine: Stopping "embed-certs-781270"...
	I0115 10:30:34.724690   45272 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:30:34.726537   45272 main.go:141] libmachine: (embed-certs-781270) Calling .Stop
	I0115 10:30:34.730288   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 0/60
	I0115 10:30:35.731840   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 1/60
	I0115 10:30:36.733284   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 2/60
	I0115 10:30:37.734866   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 3/60
	I0115 10:30:38.736923   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 4/60
	I0115 10:30:39.739384   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 5/60
	I0115 10:30:40.741513   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 6/60
	I0115 10:30:41.743039   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 7/60
	I0115 10:30:42.744480   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 8/60
	I0115 10:30:43.745974   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 9/60
	I0115 10:30:44.748109   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 10/60
	I0115 10:30:45.749495   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 11/60
	I0115 10:30:46.750864   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 12/60
	I0115 10:30:47.752323   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 13/60
	I0115 10:30:48.753670   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 14/60
	I0115 10:30:49.755683   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 15/60
	I0115 10:30:50.757118   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 16/60
	I0115 10:30:51.758366   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 17/60
	I0115 10:30:52.759806   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 18/60
	I0115 10:30:53.761056   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 19/60
	I0115 10:30:54.763286   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 20/60
	I0115 10:30:55.764778   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 21/60
	I0115 10:30:56.766190   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 22/60
	I0115 10:30:57.767755   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 23/60
	I0115 10:30:58.769076   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 24/60
	I0115 10:30:59.770875   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 25/60
	I0115 10:31:00.772242   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 26/60
	I0115 10:31:01.773580   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 27/60
	I0115 10:31:02.775017   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 28/60
	I0115 10:31:03.777114   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 29/60
	I0115 10:31:04.778576   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 30/60
	I0115 10:31:05.781072   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 31/60
	I0115 10:31:06.782856   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 32/60
	I0115 10:31:07.785045   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 33/60
	I0115 10:31:08.787335   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 34/60
	I0115 10:31:09.789365   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 35/60
	I0115 10:31:10.790844   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 36/60
	I0115 10:31:11.792919   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 37/60
	I0115 10:31:12.794584   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 38/60
	I0115 10:31:13.795962   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 39/60
	I0115 10:31:14.798241   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 40/60
	I0115 10:31:15.799467   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 41/60
	I0115 10:31:16.800865   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 42/60
	I0115 10:31:17.802337   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 43/60
	I0115 10:31:18.803799   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 44/60
	I0115 10:31:19.805719   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 45/60
	I0115 10:31:20.807631   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 46/60
	I0115 10:31:21.809040   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 47/60
	I0115 10:31:22.810495   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 48/60
	I0115 10:31:23.812044   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 49/60
	I0115 10:31:24.813855   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 50/60
	I0115 10:31:25.815298   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 51/60
	I0115 10:31:26.816607   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 52/60
	I0115 10:31:27.818009   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 53/60
	I0115 10:31:28.819562   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 54/60
	I0115 10:31:29.821451   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 55/60
	I0115 10:31:30.822924   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 56/60
	I0115 10:31:31.824316   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 57/60
	I0115 10:31:32.826330   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 58/60
	I0115 10:31:33.828282   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 59/60
	I0115 10:31:34.829523   45272 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:31:34.829576   45272 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:34.829607   45272 retry.go:31] will retry after 805.376624ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:31:35.635516   45272 stop.go:39] StopHost: embed-certs-781270
	I0115 10:31:35.635857   45272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:31:35.635895   45272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:31:35.650488   45272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0115 10:31:35.650896   45272 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:31:35.651445   45272 main.go:141] libmachine: Using API Version  1
	I0115 10:31:35.651497   45272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:31:35.651815   45272 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:31:35.654087   45272 out.go:177] * Stopping node "embed-certs-781270"  ...
	I0115 10:31:35.655529   45272 main.go:141] libmachine: Stopping "embed-certs-781270"...
	I0115 10:31:35.655543   45272 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:31:35.657099   45272 main.go:141] libmachine: (embed-certs-781270) Calling .Stop
	I0115 10:31:35.660401   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 0/60
	I0115 10:31:36.661683   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 1/60
	I0115 10:31:37.663243   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 2/60
	I0115 10:31:38.664698   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 3/60
	I0115 10:31:39.666020   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 4/60
	I0115 10:31:40.667595   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 5/60
	I0115 10:31:41.670040   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 6/60
	I0115 10:31:42.671373   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 7/60
	I0115 10:31:43.672848   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 8/60
	I0115 10:31:44.674086   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 9/60
	I0115 10:31:45.675754   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 10/60
	I0115 10:31:46.677039   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 11/60
	I0115 10:31:47.678463   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 12/60
	I0115 10:31:48.679776   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 13/60
	I0115 10:31:49.681000   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 14/60
	I0115 10:31:50.683092   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 15/60
	I0115 10:31:51.684434   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 16/60
	I0115 10:31:52.685669   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 17/60
	I0115 10:31:53.687104   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 18/60
	I0115 10:31:54.688282   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 19/60
	I0115 10:31:55.689539   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 20/60
	I0115 10:31:56.691153   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 21/60
	I0115 10:31:57.692271   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 22/60
	I0115 10:31:58.693426   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 23/60
	I0115 10:31:59.694550   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 24/60
	I0115 10:32:00.696350   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 25/60
	I0115 10:32:01.697863   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 26/60
	I0115 10:32:02.699369   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 27/60
	I0115 10:32:03.700779   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 28/60
	I0115 10:32:04.701914   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 29/60
	I0115 10:32:05.704026   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 30/60
	I0115 10:32:06.705529   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 31/60
	I0115 10:32:07.706994   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 32/60
	I0115 10:32:08.708259   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 33/60
	I0115 10:32:09.709559   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 34/60
	I0115 10:32:10.711704   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 35/60
	I0115 10:32:11.712971   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 36/60
	I0115 10:32:12.714202   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 37/60
	I0115 10:32:13.715448   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 38/60
	I0115 10:32:14.716828   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 39/60
	I0115 10:32:15.718680   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 40/60
	I0115 10:32:16.720100   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 41/60
	I0115 10:32:17.721524   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 42/60
	I0115 10:32:18.723208   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 43/60
	I0115 10:32:19.724779   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 44/60
	I0115 10:32:20.726877   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 45/60
	I0115 10:32:21.729154   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 46/60
	I0115 10:32:22.730610   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 47/60
	I0115 10:32:23.732911   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 48/60
	I0115 10:32:24.734155   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 49/60
	I0115 10:32:25.736145   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 50/60
	I0115 10:32:26.737843   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 51/60
	I0115 10:32:27.739705   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 52/60
	I0115 10:32:28.740975   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 53/60
	I0115 10:32:29.742085   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 54/60
	I0115 10:32:30.743746   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 55/60
	I0115 10:32:31.745002   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 56/60
	I0115 10:32:32.746364   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 57/60
	I0115 10:32:33.747832   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 58/60
	I0115 10:32:34.749006   45272 main.go:141] libmachine: (embed-certs-781270) Waiting for machine to stop 59/60
	I0115 10:32:35.750030   45272 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:32:35.750072   45272 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:32:35.752028   45272 out.go:177] 
	W0115 10:32:35.753349   45272 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0115 10:32:35.753365   45272 out.go:239] * 
	* 
	W0115 10:32:35.755757   45272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 10:32:35.757350   45272 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-781270 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270: exit status 3 (18.510917283s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:54.270762   46169 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host
	E0115 10:32:54.270781   46169 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-781270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.63s)
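The post-mortem degrades to exit status 3 for the same reason in each of these Stop failures: the status helper opens an SSH session to the node (the stderr above shows it doing so to read the storage capacity of /var), and the TCP dial to port 22 fails with "no route to host", so the host state is reported as "Error" rather than "Stopped". The following is a tiny probe of that same condition using only the Go standard library; the address is simply the node IP quoted in the stderr above, and the program is an illustration, not part of minikube.

	// Probe the node's SSH port the way the failing status check effectively does.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.72.222:22" // node IP and SSH port taken from the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// On the CI host this prints the same kind of error as the log,
			// e.g. "dial tcp 192.168.72.222:22: connect: no route to host".
			fmt.Println("status error:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable; status checks can proceed")
	}

When the dial fails, the status helper cannot tell a cleanly stopped VM from a wedged one, which is why these tests see "Error" where they expected "Stopped".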

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-709012 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-709012 --alsologtostderr -v=3: exit status 82 (2m1.038358424s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-709012"  ...
	* Stopping node "default-k8s-diff-port-709012"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 10:32:27.977423   46140 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:32:27.977569   46140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:32:27.977579   46140 out.go:309] Setting ErrFile to fd 2...
	I0115 10:32:27.977586   46140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:32:27.977802   46140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:32:27.978047   46140 out.go:303] Setting JSON to false
	I0115 10:32:27.978142   46140 mustload.go:65] Loading cluster: default-k8s-diff-port-709012
	I0115 10:32:27.978542   46140 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:32:27.978626   46140 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:32:27.978808   46140 mustload.go:65] Loading cluster: default-k8s-diff-port-709012
	I0115 10:32:27.979009   46140 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:32:27.979079   46140 stop.go:39] StopHost: default-k8s-diff-port-709012
	I0115 10:32:27.979520   46140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:32:27.979600   46140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:32:27.994382   46140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0115 10:32:27.994920   46140 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:32:27.995471   46140 main.go:141] libmachine: Using API Version  1
	I0115 10:32:27.995539   46140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:32:27.995962   46140 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:32:27.998260   46140 out.go:177] * Stopping node "default-k8s-diff-port-709012"  ...
	I0115 10:32:27.999954   46140 main.go:141] libmachine: Stopping "default-k8s-diff-port-709012"...
	I0115 10:32:27.999977   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:32:28.001685   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Stop
	I0115 10:32:28.005051   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 0/60
	I0115 10:32:29.006307   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 1/60
	I0115 10:32:30.007557   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 2/60
	I0115 10:32:31.008850   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 3/60
	I0115 10:32:32.010128   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 4/60
	I0115 10:32:33.012110   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 5/60
	I0115 10:32:34.013551   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 6/60
	I0115 10:32:35.014776   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 7/60
	I0115 10:32:36.016034   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 8/60
	I0115 10:32:37.017581   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 9/60
	I0115 10:32:38.018971   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 10/60
	I0115 10:32:39.020287   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 11/60
	I0115 10:32:40.021549   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 12/60
	I0115 10:32:41.023082   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 13/60
	I0115 10:32:42.024287   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 14/60
	I0115 10:32:43.026099   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 15/60
	I0115 10:32:44.027402   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 16/60
	I0115 10:32:45.028793   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 17/60
	I0115 10:32:46.030082   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 18/60
	I0115 10:32:47.031308   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 19/60
	I0115 10:32:48.033349   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 20/60
	I0115 10:32:49.034754   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 21/60
	I0115 10:32:50.036197   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 22/60
	I0115 10:32:51.037522   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 23/60
	I0115 10:32:52.038953   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 24/60
	I0115 10:32:53.040922   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 25/60
	I0115 10:32:54.042276   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 26/60
	I0115 10:32:55.043486   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 27/60
	I0115 10:32:56.044910   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 28/60
	I0115 10:32:57.046238   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 29/60
	I0115 10:32:58.048463   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 30/60
	I0115 10:32:59.049737   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 31/60
	I0115 10:33:00.051080   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 32/60
	I0115 10:33:01.052434   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 33/60
	I0115 10:33:02.053941   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 34/60
	I0115 10:33:03.055700   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 35/60
	I0115 10:33:04.056954   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 36/60
	I0115 10:33:05.058428   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 37/60
	I0115 10:33:06.059759   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 38/60
	I0115 10:33:07.061050   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 39/60
	I0115 10:33:08.063361   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 40/60
	I0115 10:33:09.064625   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 41/60
	I0115 10:33:10.065879   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 42/60
	I0115 10:33:11.067151   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 43/60
	I0115 10:33:12.068396   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 44/60
	I0115 10:33:13.070430   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 45/60
	I0115 10:33:14.071827   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 46/60
	I0115 10:33:15.073110   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 47/60
	I0115 10:33:16.074375   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 48/60
	I0115 10:33:17.075788   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 49/60
	I0115 10:33:18.078046   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 50/60
	I0115 10:33:19.079434   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 51/60
	I0115 10:33:20.080814   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 52/60
	I0115 10:33:21.082187   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 53/60
	I0115 10:33:22.083756   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 54/60
	I0115 10:33:23.085771   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 55/60
	I0115 10:33:24.088155   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 56/60
	I0115 10:33:25.089567   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 57/60
	I0115 10:33:26.091039   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 58/60
	I0115 10:33:27.092213   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 59/60
	I0115 10:33:28.093603   46140 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:33:28.093649   46140 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:33:28.093674   46140 retry.go:31] will retry after 744.896179ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:33:28.839671   46140 stop.go:39] StopHost: default-k8s-diff-port-709012
	I0115 10:33:28.840043   46140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:33:28.840085   46140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:33:28.854278   46140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0115 10:33:28.854743   46140 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:33:28.855248   46140 main.go:141] libmachine: Using API Version  1
	I0115 10:33:28.855278   46140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:33:28.855611   46140 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:33:28.857875   46140 out.go:177] * Stopping node "default-k8s-diff-port-709012"  ...
	I0115 10:33:28.859507   46140 main.go:141] libmachine: Stopping "default-k8s-diff-port-709012"...
	I0115 10:33:28.859523   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:33:28.861186   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Stop
	I0115 10:33:28.864644   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 0/60
	I0115 10:33:29.865939   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 1/60
	I0115 10:33:30.867397   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 2/60
	I0115 10:33:31.868835   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 3/60
	I0115 10:33:32.870222   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 4/60
	I0115 10:33:33.871676   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 5/60
	I0115 10:33:34.873021   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 6/60
	I0115 10:33:35.874462   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 7/60
	I0115 10:33:36.875650   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 8/60
	I0115 10:33:37.877003   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 9/60
	I0115 10:33:38.879079   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 10/60
	I0115 10:33:39.880594   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 11/60
	I0115 10:33:40.881824   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 12/60
	I0115 10:33:41.883103   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 13/60
	I0115 10:33:42.884582   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 14/60
	I0115 10:33:43.886593   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 15/60
	I0115 10:33:44.888774   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 16/60
	I0115 10:33:45.890092   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 17/60
	I0115 10:33:46.891473   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 18/60
	I0115 10:33:47.892743   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 19/60
	I0115 10:33:48.894205   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 20/60
	I0115 10:33:49.895419   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 21/60
	I0115 10:33:50.896837   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 22/60
	I0115 10:33:51.898120   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 23/60
	I0115 10:33:52.899491   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 24/60
	I0115 10:33:53.901660   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 25/60
	I0115 10:33:54.903100   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 26/60
	I0115 10:33:55.904867   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 27/60
	I0115 10:33:56.906169   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 28/60
	I0115 10:33:57.907647   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 29/60
	I0115 10:33:58.909283   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 30/60
	I0115 10:33:59.910644   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 31/60
	I0115 10:34:00.911843   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 32/60
	I0115 10:34:01.913236   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 33/60
	I0115 10:34:02.914484   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 34/60
	I0115 10:34:03.916651   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 35/60
	I0115 10:34:04.917874   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 36/60
	I0115 10:34:05.919225   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 37/60
	I0115 10:34:06.920503   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 38/60
	I0115 10:34:07.921910   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 39/60
	I0115 10:34:08.923629   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 40/60
	I0115 10:34:09.925068   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 41/60
	I0115 10:34:10.926478   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 42/60
	I0115 10:34:11.927688   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 43/60
	I0115 10:34:12.929020   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 44/60
	I0115 10:34:13.930510   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 45/60
	I0115 10:34:14.931933   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 46/60
	I0115 10:34:15.933409   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 47/60
	I0115 10:34:16.934775   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 48/60
	I0115 10:34:17.936181   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 49/60
	I0115 10:34:18.937686   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 50/60
	I0115 10:34:19.939147   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 51/60
	I0115 10:34:20.940546   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 52/60
	I0115 10:34:21.942039   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 53/60
	I0115 10:34:22.943333   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 54/60
	I0115 10:34:23.945549   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 55/60
	I0115 10:34:24.946919   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 56/60
	I0115 10:34:25.948185   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 57/60
	I0115 10:34:26.949466   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 58/60
	I0115 10:34:27.950717   46140 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for machine to stop 59/60
	I0115 10:34:28.952176   46140 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0115 10:34:28.952219   46140 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0115 10:34:28.954343   46140 out.go:177] 
	W0115 10:34:28.956026   46140 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0115 10:34:28.956039   46140 out.go:239] * 
	* 
	W0115 10:34:28.958325   46140 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 10:34:28.959905   46140 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-709012 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012: exit status 3 (18.461632329s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:34:47.422773   46889 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0115 10:34:47.422796   46889 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-709012" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.50s)
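All of the Stop failures in this run have the same shape on the KVM driver: the stop request is issued, but the libvirt domain never reaches a stopped state before the two 60-second wait loops expire. When reproducing this outside CI it can help to bypass minikube and ask libvirt directly what state the domain is in, and to force it off only if the graceful path has clearly stalled. The sketch below is a hypothetical diagnostic for the CI host, not minikube's own stop path; it assumes virsh is installed and that the libvirt domain carries the profile name quoted in the log.

	// Hypothetical diagnostic: inspect and, if necessary, force off a stuck domain.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		domain := "default-k8s-diff-port-709012" // assumed to match the profile name

		out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
		if err != nil {
			fmt.Printf("virsh domstate failed: %v\n%s", err, out)
			return
		}
		state := strings.TrimSpace(string(out))
		fmt.Printf("domain %s state: %s\n", domain, state)

		if state == "running" {
			// Forced power-off (the equivalent of pulling the plug). Only for
			// unwedging a test host after the graceful ACPI path has stalled.
			if out, err := exec.Command("virsh", "destroy", domain).CombinedOutput(); err != nil {
				fmt.Printf("virsh destroy failed: %v\n%s", err, out)
				return
			}
			fmt.Println("domain forced off")
		}
	}

"virsh shutdown" is the graceful counterpart of what the failing stop already attempts, so "virsh destroy" here is strictly a last resort for cleaning up a stuck test host.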

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509: exit status 3 (3.199465586s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:40.318739   46210 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host
	E0115 10:32:40.318777   46210 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-206509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-206509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153614353s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-206509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509: exit status 3 (3.062212492s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:49.534832   46317 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host
	E0115 10:32:49.534863   46317 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-206509" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502: exit status 3 (3.199842276s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:40.318762   46211 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host
	E0115 10:32:40.318779   46211 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-824502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-824502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152298667s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-824502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502: exit status 3 (3.063429486s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:49.534838   46316 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host
	E0115 10:32:49.534857   46316 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.136:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-824502" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270: exit status 3 (3.167772722s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:32:57.438747   46472 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host
	E0115 10:32:57.438778   46472 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-781270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-781270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154176512s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-781270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270: exit status 3 (3.061965128s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:33:06.654857   46554 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host
	E0115 10:33:06.654879   46554 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-781270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012: exit status 3 (3.167542772s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:34:50.590717   46963 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0115 10:34:50.590736   46963 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-709012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-709012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.158754787s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-709012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012: exit status 3 (3.057336018s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 10:34:59.806799   47022 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0115 10:34:59.806815   47022 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-709012" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-781270 -n embed-certs-781270
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:52:09.359352967 +0000 UTC m=+5141.330520382
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-781270 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-781270 logs -n 25: (1.736941177s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-967423 -- sudo                         | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-967423                                 | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-317803                           | kubernetes-upgrade-317803    | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-824502             | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:34:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:34:59.863813   47063 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:34:59.864093   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864103   47063 out.go:309] Setting ErrFile to fd 2...
	I0115 10:34:59.864108   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864345   47063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:34:59.864916   47063 out.go:303] Setting JSON to false
	I0115 10:34:59.865821   47063 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4600,"bootTime":1705310300,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:34:59.865878   47063 start.go:138] virtualization: kvm guest
	I0115 10:34:59.868392   47063 out.go:177] * [default-k8s-diff-port-709012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:34:59.869886   47063 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:34:59.869920   47063 notify.go:220] Checking for updates...
	I0115 10:34:59.871289   47063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:34:59.872699   47063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:34:59.874242   47063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:34:59.875739   47063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:34:59.877248   47063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:34:59.879143   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:34:59.879618   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.879682   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.893745   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0115 10:34:59.894091   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.894610   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.894633   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.894933   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.895112   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.895305   47063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:34:59.895579   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.895611   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.909045   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0115 10:34:59.909415   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.909868   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.909886   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.910173   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.910346   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.943453   47063 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:34:59.945154   47063 start.go:298] selected driver: kvm2
	I0115 10:34:59.945164   47063 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.945252   47063 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:34:59.945926   47063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.945991   47063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:34:59.959656   47063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:34:59.960028   47063 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:34:59.960078   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:34:59.960091   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:34:59.960106   47063 start_flags.go:321] config:
	{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-70901
2 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.960261   47063 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.962534   47063 out.go:177] * Starting control plane node default-k8s-diff-port-709012 in cluster default-k8s-diff-port-709012
	I0115 10:35:00.734685   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:34:59.963970   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:34:59.964003   47063 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:34:59.964012   47063 cache.go:56] Caching tarball of preloaded images
	I0115 10:34:59.964081   47063 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:34:59.964090   47063 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:34:59.964172   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:34:59.964356   47063 start.go:365] acquiring machines lock for default-k8s-diff-port-709012: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:35:06.814638   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:09.886665   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:15.966704   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:19.038663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:25.118649   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:28.190674   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:34.270660   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:37.342618   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:43.422663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:46.494729   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:52.574698   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:55.646737   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:01.726677   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:04.798681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:10.878645   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:13.950716   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:20.030691   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:23.102681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:29.182668   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:32.254641   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:38.334686   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:41.406690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:47.486639   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:50.558690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:56.638684   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:59.710581   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:05.790664   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:08.862738   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:14.942615   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:18.014720   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:24.094644   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:27.098209   46387 start.go:369] acquired machines lock for "old-k8s-version-206509" in 4m37.373222591s
	I0115 10:37:27.098259   46387 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:27.098264   46387 fix.go:54] fixHost starting: 
	I0115 10:37:27.098603   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:27.098633   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:27.112818   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0115 10:37:27.113206   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:27.113638   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:37:27.113660   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:27.113943   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:27.114126   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:27.114270   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:37:27.115824   46387 fix.go:102] recreateIfNeeded on old-k8s-version-206509: state=Stopped err=<nil>
	I0115 10:37:27.115846   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	W0115 10:37:27.116007   46387 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:27.118584   46387 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-206509" ...
	I0115 10:37:27.119985   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Start
	I0115 10:37:27.120145   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring networks are active...
	I0115 10:37:27.120788   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network default is active
	I0115 10:37:27.121077   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network mk-old-k8s-version-206509 is active
	I0115 10:37:27.121463   46387 main.go:141] libmachine: (old-k8s-version-206509) Getting domain xml...
	I0115 10:37:27.122185   46387 main.go:141] libmachine: (old-k8s-version-206509) Creating domain...
	I0115 10:37:28.295990   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting to get IP...
	I0115 10:37:28.297038   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.297393   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.297470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.297380   47440 retry.go:31] will retry after 254.616903ms: waiting for machine to come up
	I0115 10:37:28.553730   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.554213   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.554238   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.554159   47440 retry.go:31] will retry after 350.995955ms: waiting for machine to come up
	I0115 10:37:28.906750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.907189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.907222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.907146   47440 retry.go:31] will retry after 441.292217ms: waiting for machine to come up
	I0115 10:37:29.349643   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.350011   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.350042   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.349959   47440 retry.go:31] will retry after 544.431106ms: waiting for machine to come up
	I0115 10:37:27.096269   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:27.096303   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:37:27.098084   46388 machine.go:91] provisioned docker machine in 4m37.366643974s
	I0115 10:37:27.098120   46388 fix.go:56] fixHost completed within 4m37.388460167s
	I0115 10:37:27.098126   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 4m37.388479036s
	W0115 10:37:27.098153   46388 start.go:694] error starting host: provision: host is not running
	W0115 10:37:27.098242   46388 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 10:37:27.098252   46388 start.go:709] Will try again in 5 seconds ...
	I0115 10:37:29.895609   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.896157   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.896189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.896032   47440 retry.go:31] will retry after 489.420436ms: waiting for machine to come up
	I0115 10:37:30.386614   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:30.387037   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:30.387071   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:30.387005   47440 retry.go:31] will retry after 779.227065ms: waiting for machine to come up
	I0115 10:37:31.167934   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:31.168316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:31.168343   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:31.168273   47440 retry.go:31] will retry after 878.328646ms: waiting for machine to come up
	I0115 10:37:32.048590   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:32.048976   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:32.049001   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:32.048920   47440 retry.go:31] will retry after 1.282650862s: waiting for machine to come up
	I0115 10:37:33.333699   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:33.334132   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:33.334161   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:33.334078   47440 retry.go:31] will retry after 1.548948038s: waiting for machine to come up
	I0115 10:37:32.100253   46388 start.go:365] acquiring machines lock for no-preload-824502: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:37:34.884455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:34.884845   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:34.884866   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:34.884800   47440 retry.go:31] will retry after 1.555315627s: waiting for machine to come up
	I0115 10:37:36.441833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:36.442329   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:36.442352   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:36.442281   47440 retry.go:31] will retry after 1.803564402s: waiting for machine to come up
	I0115 10:37:38.247833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:38.248241   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:38.248283   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:38.248213   47440 retry.go:31] will retry after 3.514521425s: waiting for machine to come up
	I0115 10:37:41.766883   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:41.767187   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:41.767222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:41.767154   47440 retry.go:31] will retry after 4.349871716s: waiting for machine to come up
	I0115 10:37:47.571869   46584 start.go:369] acquired machines lock for "embed-certs-781270" in 4m40.757219204s
	I0115 10:37:47.571928   46584 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:47.571936   46584 fix.go:54] fixHost starting: 
	I0115 10:37:47.572344   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:47.572382   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:47.591532   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0115 10:37:47.591905   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:47.592471   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:37:47.592513   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:47.592835   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:47.593060   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:37:47.593221   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:37:47.594825   46584 fix.go:102] recreateIfNeeded on embed-certs-781270: state=Stopped err=<nil>
	I0115 10:37:47.594856   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	W0115 10:37:47.595015   46584 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:47.597457   46584 out.go:177] * Restarting existing kvm2 VM for "embed-certs-781270" ...
	I0115 10:37:46.118479   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.118936   46387 main.go:141] libmachine: (old-k8s-version-206509) Found IP for machine: 192.168.61.70
	I0115 10:37:46.118960   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserving static IP address...
	I0115 10:37:46.118978   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has current primary IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.119402   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.119425   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserved static IP address: 192.168.61.70
	I0115 10:37:46.119441   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | skip adding static IP to network mk-old-k8s-version-206509 - found existing host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"}
	I0115 10:37:46.119455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Getting to WaitForSSH function...
	I0115 10:37:46.119467   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting for SSH to be available...
	I0115 10:37:46.121874   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122204   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.122236   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122340   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH client type: external
	I0115 10:37:46.122364   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa (-rw-------)
	I0115 10:37:46.122452   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:37:46.122476   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | About to run SSH command:
	I0115 10:37:46.122492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | exit 0
	I0115 10:37:46.214102   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | SSH cmd err, output: <nil>: 
	I0115 10:37:46.214482   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetConfigRaw
	I0115 10:37:46.215064   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.217294   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217579   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.217618   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217784   46387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:37:46.218001   46387 machine.go:88] provisioning docker machine ...
	I0115 10:37:46.218022   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:46.218242   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218440   46387 buildroot.go:166] provisioning hostname "old-k8s-version-206509"
	I0115 10:37:46.218462   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218593   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.220842   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221188   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.221226   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221374   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.221525   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221662   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221760   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.221905   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.222391   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.222411   46387 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-206509 && echo "old-k8s-version-206509" | sudo tee /etc/hostname
	I0115 10:37:46.354906   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-206509
	
	I0115 10:37:46.354939   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.357679   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358051   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.358089   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358245   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.358470   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358642   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358799   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.358957   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.359291   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.359318   46387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-206509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-206509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-206509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:37:46.491369   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:46.491397   46387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:37:46.491413   46387 buildroot.go:174] setting up certificates
	I0115 10:37:46.491422   46387 provision.go:83] configureAuth start
	I0115 10:37:46.491430   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.491687   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.494369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.494779   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494863   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.496985   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497338   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.497368   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497537   46387 provision.go:138] copyHostCerts
	I0115 10:37:46.497598   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:37:46.497613   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:37:46.497694   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:37:46.497806   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:37:46.497818   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:37:46.497848   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:37:46.497925   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:37:46.497945   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:37:46.497982   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:37:46.498043   46387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-206509 san=[192.168.61.70 192.168.61.70 localhost 127.0.0.1 minikube old-k8s-version-206509]
	I0115 10:37:46.824648   46387 provision.go:172] copyRemoteCerts
	I0115 10:37:46.824702   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:37:46.824723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.827470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827785   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.827818   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827972   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.828174   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.828336   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.828484   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:46.919822   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:37:46.941728   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:37:46.963042   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0115 10:37:46.983757   46387 provision.go:86] duration metric: configureAuth took 492.325875ms
	I0115 10:37:46.983777   46387 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:37:46.983966   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:37:46.984048   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.986525   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.986843   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.986869   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.987107   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.987323   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987503   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987651   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.987795   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.988198   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.988219   46387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:37:47.308225   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:37:47.308256   46387 machine.go:91] provisioned docker machine in 1.090242192s
	I0115 10:37:47.308269   46387 start.go:300] post-start starting for "old-k8s-version-206509" (driver="kvm2")
	I0115 10:37:47.308284   46387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:37:47.308310   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.308641   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:37:47.308674   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.311316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311665   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.311700   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311835   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.312024   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.312190   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.312315   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.407169   46387 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:37:47.411485   46387 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:37:47.411504   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:37:47.411566   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:37:47.411637   46387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:37:47.411715   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:37:47.419976   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:47.446992   46387 start.go:303] post-start completed in 138.700951ms
	I0115 10:37:47.447013   46387 fix.go:56] fixHost completed within 20.348748891s
	I0115 10:37:47.447031   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.449638   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.449996   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.450048   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.450136   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.450309   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450620   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.450749   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:47.451070   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:47.451085   46387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:37:47.571711   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315067.520557177
	
	I0115 10:37:47.571729   46387 fix.go:206] guest clock: 1705315067.520557177
	I0115 10:37:47.571748   46387 fix.go:219] Guest: 2024-01-15 10:37:47.520557177 +0000 UTC Remote: 2024-01-15 10:37:47.447016864 +0000 UTC m=+297.904172196 (delta=73.540313ms)
	I0115 10:37:47.571772   46387 fix.go:190] guest clock delta is within tolerance: 73.540313ms
	I0115 10:37:47.571782   46387 start.go:83] releasing machines lock for "old-k8s-version-206509", held for 20.473537585s
	I0115 10:37:47.571810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.572157   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:47.574952   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575328   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.575366   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.575957   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576146   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576232   46387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:37:47.576273   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.576381   46387 ssh_runner.go:195] Run: cat /version.json
	I0115 10:37:47.576406   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.578863   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579052   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579218   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579248   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579347   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579378   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579385   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579577   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579583   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579775   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.579810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579912   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.580094   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.580316   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.702555   46387 ssh_runner.go:195] Run: systemctl --version
	I0115 10:37:47.708309   46387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:37:47.862103   46387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:37:47.869243   46387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:37:47.869321   46387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:37:47.886013   46387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:37:47.886033   46387 start.go:475] detecting cgroup driver to use...
	I0115 10:37:47.886093   46387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:37:47.901265   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:37:47.913762   46387 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:37:47.913815   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:37:47.926880   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:37:47.942744   46387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:37:48.050667   46387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:37:48.168614   46387 docker.go:233] disabling docker service ...
	I0115 10:37:48.168679   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:37:48.181541   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:37:48.193155   46387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:37:48.312374   46387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:37:48.420624   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:37:48.432803   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:37:48.449232   46387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0115 10:37:48.449292   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.458042   46387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:37:48.458109   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.466909   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.475511   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.484081   46387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:37:48.493186   46387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:37:48.502460   46387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:37:48.502507   46387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:37:48.514913   46387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:37:48.522816   46387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:37:48.630774   46387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:37:48.807089   46387 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:37:48.807170   46387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:37:48.812950   46387 start.go:543] Will wait 60s for crictl version
	I0115 10:37:48.813005   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:48.816919   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:37:48.860058   46387 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:37:48.860143   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.916839   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.968312   46387 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0115 10:37:48.969913   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:48.972776   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973219   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:48.973249   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973519   46387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:37:48.977593   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:48.990551   46387 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:37:48.990613   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:49.030917   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:49.030973   46387 ssh_runner.go:195] Run: which lz4
	I0115 10:37:49.035059   46387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:37:49.039231   46387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:37:49.039262   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0115 10:37:47.598904   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Start
	I0115 10:37:47.599102   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring networks are active...
	I0115 10:37:47.599886   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network default is active
	I0115 10:37:47.600258   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network mk-embed-certs-781270 is active
	I0115 10:37:47.600652   46584 main.go:141] libmachine: (embed-certs-781270) Getting domain xml...
	I0115 10:37:47.601365   46584 main.go:141] libmachine: (embed-certs-781270) Creating domain...
	I0115 10:37:48.842510   46584 main.go:141] libmachine: (embed-certs-781270) Waiting to get IP...
	I0115 10:37:48.843267   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:48.843637   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:48.843731   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:48.843603   47574 retry.go:31] will retry after 262.69562ms: waiting for machine to come up
	I0115 10:37:49.108361   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.108861   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.108901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.108796   47574 retry.go:31] will retry after 379.820541ms: waiting for machine to come up
	I0115 10:37:49.490343   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.490939   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.490979   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.490898   47574 retry.go:31] will retry after 463.282743ms: waiting for machine to come up
	I0115 10:37:49.956222   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.956694   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.956725   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.956646   47574 retry.go:31] will retry after 539.780461ms: waiting for machine to come up
	I0115 10:37:50.498391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:50.498901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:50.498935   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:50.498849   47574 retry.go:31] will retry after 611.580301ms: waiting for machine to come up
	I0115 10:37:51.111752   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.112228   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.112263   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.112194   47574 retry.go:31] will retry after 837.335782ms: waiting for machine to come up
	I0115 10:37:50.824399   46387 crio.go:444] Took 1.789376 seconds to copy over tarball
	I0115 10:37:50.824466   46387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:37:53.837707   46387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013210203s)
	I0115 10:37:53.837742   46387 crio.go:451] Took 3.013322 seconds to extract the tarball
	I0115 10:37:53.837753   46387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:37:53.876939   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:53.922125   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:53.922161   46387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:37:53.922213   46387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:53.922249   46387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.922267   46387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.922300   46387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.922520   46387 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.922527   46387 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.922544   46387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.922547   46387 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.923794   46387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.923809   46387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.923811   46387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.923807   46387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.923785   46387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.923843   46387 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.083650   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0115 10:37:54.090328   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.095213   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.123642   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.124012   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.139399   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.139406   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.207117   46387 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0115 10:37:54.207170   46387 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0115 10:37:54.207168   46387 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0115 10:37:54.207202   46387 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.207230   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.207248   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.248774   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.269586   46387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0115 10:37:54.269636   46387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.269661   46387 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0115 10:37:54.269693   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.269693   46387 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.269785   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404758   46387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0115 10:37:54.404862   46387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0115 10:37:54.404907   46387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.404969   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0115 10:37:54.404996   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404873   46387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.405034   46387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0115 10:37:54.405064   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404975   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.405082   46387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.405174   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.405202   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.405149   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.502357   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.502402   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0115 10:37:54.502507   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0115 10:37:54.502547   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.502504   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.502620   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0115 10:37:54.510689   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0115 10:37:54.577797   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0115 10:37:54.577854   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0115 10:37:54.577885   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0115 10:37:54.577945   46387 cache_images.go:92] LoadImages completed in 655.770059ms
	W0115 10:37:54.578019   46387 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0115 10:37:54.578091   46387 ssh_runner.go:195] Run: crio config
	I0115 10:37:51.950759   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.951289   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.951322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.951237   47574 retry.go:31] will retry after 817.063291ms: waiting for machine to come up
	I0115 10:37:52.770506   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:52.771015   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:52.771043   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:52.770977   47574 retry.go:31] will retry after 1.000852987s: waiting for machine to come up
	I0115 10:37:53.774011   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:53.774478   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:53.774518   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:53.774452   47574 retry.go:31] will retry after 1.171113667s: waiting for machine to come up
	I0115 10:37:54.947562   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:54.947925   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:54.947951   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:54.947887   47574 retry.go:31] will retry after 1.982035367s: waiting for machine to come up
	I0115 10:37:54.646104   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:37:54.750728   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:37:54.750754   46387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:37:54.750779   46387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-206509 NodeName:old-k8s-version-206509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 10:37:54.750935   46387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-206509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-206509
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:37:54.751014   46387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-206509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:37:54.751063   46387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0115 10:37:54.761568   46387 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:37:54.761645   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:37:54.771892   46387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0115 10:37:54.788678   46387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:37:54.804170   46387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0115 10:37:54.820285   46387 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I0115 10:37:54.823831   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:54.834806   46387 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509 for IP: 192.168.61.70
	I0115 10:37:54.834838   46387 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:37:54.835023   46387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:37:54.835070   46387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:37:54.835136   46387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.key
	I0115 10:37:54.835190   46387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key.99472042
	I0115 10:37:54.835249   46387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key
	I0115 10:37:54.835356   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:37:54.835392   46387 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:37:54.835401   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:37:54.835439   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:37:54.835467   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:37:54.835491   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:37:54.835531   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:54.836204   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:37:54.859160   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:37:54.884674   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:37:54.907573   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:37:54.930846   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:37:54.953329   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:37:54.975335   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:37:54.997505   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:37:55.020494   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:37:55.042745   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:37:55.064085   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:37:55.085243   46387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:37:55.101189   46387 ssh_runner.go:195] Run: openssl version
	I0115 10:37:55.106849   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:37:55.118631   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123477   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123545   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.129290   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:37:55.141464   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:37:55.153514   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157901   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157967   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.163557   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:37:55.173419   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:37:55.184850   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189454   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189508   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.194731   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
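(Editor's note: the three blocks above copy each CA into /usr/share/ca-certificates and then link it under its OpenSSL subject hash in /etc/ssl/certs, which is how OpenSSL finds trusted CAs. A minimal sketch of that hash-link step, reusing the minikubeCA.pem path and commands shown in the log; the hash value differs per certificate:)

    # Compute the subject-hash name OpenSSL expects, then create the <hash>.0 symlink
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as seen above
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"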
	I0115 10:37:55.205634   46387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:37:55.209881   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:37:55.215521   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:37:55.221031   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:37:55.226730   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:37:55.232566   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:37:55.238251   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
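(Editor's note: the six openssl x509 -checkend 86400 runs above verify that none of the existing control-plane certificates expire within the next 24 hours before they are reused. A standalone sketch of the same check; the paths come from the log, the loop and messages are illustrative only:)

    # -checkend N exits non-zero if the certificate expires within N seconds
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        && echo "$crt: valid for at least 24h" \
        || echo "$crt: expires within 24h"
    done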
	I0115 10:37:55.244098   46387 kubeadm.go:404] StartCluster: {Name:old-k8s-version-206509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:37:55.244188   46387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:37:55.244243   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:37:55.293223   46387 cri.go:89] found id: ""
	I0115 10:37:55.293296   46387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:37:55.305374   46387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:37:55.305403   46387 kubeadm.go:636] restartCluster start
	I0115 10:37:55.305477   46387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:37:55.314925   46387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.316564   46387 kubeconfig.go:92] found "old-k8s-version-206509" server: "https://192.168.61.70:8443"
	I0115 10:37:55.319961   46387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:37:55.329062   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.329148   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.340866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.829433   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.829549   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.843797   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.329336   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.329436   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.343947   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.829507   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.829623   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.843692   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.329438   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.329522   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.341416   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.830063   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.830153   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.844137   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.329648   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.329743   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.342211   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.829792   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.829891   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.842397   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:59.330122   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.330202   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.346667   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.931004   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:56.931428   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:56.931461   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:56.931364   47574 retry.go:31] will retry after 2.358737657s: waiting for machine to come up
	I0115 10:37:59.292322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:59.292784   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:59.292817   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:59.292726   47574 retry.go:31] will retry after 2.808616591s: waiting for machine to come up
	I0115 10:37:59.829162   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.829242   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.844148   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.329799   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.329901   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.345118   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.829706   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.829806   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.845105   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.329598   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.329678   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.341872   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.829350   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.829424   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.843987   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.329874   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.329944   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.342152   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.829617   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.829711   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.841636   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.329206   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.329306   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.341373   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.829987   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.830080   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.842151   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:04.329957   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.330047   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.342133   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.103667   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:02.104098   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:02.104127   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:02.104058   47574 retry.go:31] will retry after 2.823867183s: waiting for machine to come up
	I0115 10:38:04.931219   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:04.931550   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:04.931594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:04.931523   47574 retry.go:31] will retry after 4.042933854s: waiting for machine to come up
	I0115 10:38:04.829477   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.829599   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.841546   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.329351   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:05.329417   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:05.341866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.341892   46387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:05.341900   46387 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:05.341910   46387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:05.342037   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:05.376142   46387 cri.go:89] found id: ""
	I0115 10:38:05.376206   46387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:05.391778   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:05.402262   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:05.402331   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411457   46387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411489   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:05.526442   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.239898   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.449098   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.515862   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.598545   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:06.598653   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.099595   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.599677   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.099492   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.599629   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.627737   46387 api_server.go:72] duration metric: took 2.029196375s to wait for apiserver process to appear ...
	I0115 10:38:08.627766   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:08.627803   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
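(Editor's note: at this point the restart flow switches from waiting for the kube-apiserver process, via the repeated pgrep calls above, to polling its /healthz endpoint at https://192.168.61.70:8443. A rough hand-run equivalent of that two-stage wait, built only from the commands and endpoint recorded in the log; timeouts omitted for brevity:)

    # Stage 1: wait for the apiserver process to exist
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    # Stage 2: wait for the API server to report healthy (self-signed cert, hence -k)
    until curl -sk https://192.168.61.70:8443/healthz | grep -q ok; do sleep 1; done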
	I0115 10:38:10.199201   47063 start.go:369] acquired machines lock for "default-k8s-diff-port-709012" in 3m10.23481312s
	I0115 10:38:10.199261   47063 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:10.199269   47063 fix.go:54] fixHost starting: 
	I0115 10:38:10.199630   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:10.199667   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:10.215225   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0115 10:38:10.215627   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:10.216040   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:10.216068   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:10.216372   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:10.216583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:10.216829   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:10.218454   47063 fix.go:102] recreateIfNeeded on default-k8s-diff-port-709012: state=Stopped err=<nil>
	I0115 10:38:10.218482   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	W0115 10:38:10.218676   47063 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:10.220860   47063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-709012" ...
	I0115 10:38:08.976035   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976545   46584 main.go:141] libmachine: (embed-certs-781270) Found IP for machine: 192.168.72.222
	I0115 10:38:08.976574   46584 main.go:141] libmachine: (embed-certs-781270) Reserving static IP address...
	I0115 10:38:08.976592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has current primary IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976946   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.976980   46584 main.go:141] libmachine: (embed-certs-781270) DBG | skip adding static IP to network mk-embed-certs-781270 - found existing host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"}
	I0115 10:38:08.976997   46584 main.go:141] libmachine: (embed-certs-781270) Reserved static IP address: 192.168.72.222
	I0115 10:38:08.977017   46584 main.go:141] libmachine: (embed-certs-781270) Waiting for SSH to be available...
	I0115 10:38:08.977033   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Getting to WaitForSSH function...
	I0115 10:38:08.979155   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979456   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.979483   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979609   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH client type: external
	I0115 10:38:08.979658   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa (-rw-------)
	I0115 10:38:08.979699   46584 main.go:141] libmachine: (embed-certs-781270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:08.979718   46584 main.go:141] libmachine: (embed-certs-781270) DBG | About to run SSH command:
	I0115 10:38:08.979734   46584 main.go:141] libmachine: (embed-certs-781270) DBG | exit 0
	I0115 10:38:09.082171   46584 main.go:141] libmachine: (embed-certs-781270) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:09.082546   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetConfigRaw
	I0115 10:38:09.083235   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.085481   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.085845   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.085873   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.086115   46584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:38:09.086309   46584 machine.go:88] provisioning docker machine ...
	I0115 10:38:09.086331   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.086549   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086714   46584 buildroot.go:166] provisioning hostname "embed-certs-781270"
	I0115 10:38:09.086736   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086884   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.089346   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089702   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.089727   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.090035   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090180   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090319   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.090464   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.090845   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.090862   46584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname
	I0115 10:38:09.240609   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781270
	
	I0115 10:38:09.240643   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.243233   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243586   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.243616   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243764   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.243976   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244292   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.244453   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.244774   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.244800   46584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781270/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:09.388902   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:09.388932   46584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:09.388968   46584 buildroot.go:174] setting up certificates
	I0115 10:38:09.388981   46584 provision.go:83] configureAuth start
	I0115 10:38:09.388998   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.389254   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.392236   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392603   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.392643   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392750   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.395249   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395596   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.395629   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395797   46584 provision.go:138] copyHostCerts
	I0115 10:38:09.395858   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:09.395872   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:09.395939   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:09.396037   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:09.396045   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:09.396067   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:09.396134   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:09.396141   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:09.396159   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:09.396212   46584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781270 san=[192.168.72.222 192.168.72.222 localhost 127.0.0.1 minikube embed-certs-781270]
	I0115 10:38:09.457000   46584 provision.go:172] copyRemoteCerts
	I0115 10:38:09.457059   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:09.457081   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.459709   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460074   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.460102   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460356   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.460522   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.460681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.460798   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:09.556211   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:09.578947   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:09.601191   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:09.623814   46584 provision.go:86] duration metric: configureAuth took 234.815643ms
	I0115 10:38:09.623844   46584 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:09.624070   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:09.624157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.626592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.626930   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.626972   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.627141   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.627326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627492   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627607   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.627755   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.628058   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.628086   46584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:09.931727   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:09.931765   46584 machine.go:91] provisioned docker machine in 845.442044ms
	I0115 10:38:09.931777   46584 start.go:300] post-start starting for "embed-certs-781270" (driver="kvm2")
	I0115 10:38:09.931790   46584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:09.931810   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.932100   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:09.932130   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.934487   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934811   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.934836   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934999   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.935160   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.935313   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.935480   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.028971   46584 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:10.032848   46584 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:10.032871   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:10.032955   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:10.033045   46584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:10.033162   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:10.042133   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:10.064619   46584 start.go:303] post-start completed in 132.827155ms
	I0115 10:38:10.064658   46584 fix.go:56] fixHost completed within 22.492708172s
	I0115 10:38:10.064681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.067323   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067651   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.067675   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067812   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.068037   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068272   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068449   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.068587   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:10.068904   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:10.068919   46584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 10:38:10.199025   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315090.148648598
	
	I0115 10:38:10.199045   46584 fix.go:206] guest clock: 1705315090.148648598
	I0115 10:38:10.199053   46584 fix.go:219] Guest: 2024-01-15 10:38:10.148648598 +0000 UTC Remote: 2024-01-15 10:38:10.064662616 +0000 UTC m=+303.401739583 (delta=83.985982ms)
	I0115 10:38:10.199088   46584 fix.go:190] guest clock delta is within tolerance: 83.985982ms
	I0115 10:38:10.199096   46584 start.go:83] releasing machines lock for "embed-certs-781270", held for 22.627192785s
	I0115 10:38:10.199122   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.199368   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:10.201962   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202349   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.202389   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202603   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203417   46584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:10.203461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.203546   46584 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:10.203570   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.206022   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206257   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206371   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206400   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.206673   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206700   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206768   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.206910   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.206911   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.207087   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.207191   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.207335   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.207465   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.327677   46584 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:10.333127   46584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:10.473183   46584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:10.480054   46584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:10.480115   46584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:10.494367   46584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:10.494388   46584 start.go:475] detecting cgroup driver to use...
	I0115 10:38:10.494463   46584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:10.508327   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:10.519950   46584 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:10.520003   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:10.531743   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:10.544980   46584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:10.650002   46584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:10.767145   46584 docker.go:233] disabling docker service ...
	I0115 10:38:10.767214   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:10.782073   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:10.796419   46584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:10.913422   46584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:11.016113   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:11.032638   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:11.053360   46584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:11.053415   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.064008   46584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:11.064067   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.074353   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.084486   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.093962   46584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:11.105487   46584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:11.117411   46584 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:11.117469   46584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:11.133780   46584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:11.145607   46584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:11.257012   46584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:11.437979   46584 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:11.438050   46584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:11.445814   46584 start.go:543] Will wait 60s for crictl version
	I0115 10:38:11.445896   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:38:11.449770   46584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:11.491895   46584 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:11.491985   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.543656   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.609733   46584 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:11.611238   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:11.614594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.614947   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:11.614988   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.615225   46584 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:11.619516   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:11.635101   46584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:11.635170   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:11.675417   46584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:11.675504   46584 ssh_runner.go:195] Run: which lz4
	I0115 10:38:11.679733   46584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:38:11.683858   46584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:11.683889   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:13.628977   46387 api_server.go:269] stopped: https://192.168.61.70:8443/healthz: Get "https://192.168.61.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0115 10:38:13.629022   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.222501   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Start
	I0115 10:38:10.222694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring networks are active...
	I0115 10:38:10.223335   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network default is active
	I0115 10:38:10.225164   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network mk-default-k8s-diff-port-709012 is active
	I0115 10:38:10.225189   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Getting domain xml...
	I0115 10:38:10.225201   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Creating domain...
	I0115 10:38:11.529205   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting to get IP...
	I0115 10:38:11.530265   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530808   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530886   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.530786   47689 retry.go:31] will retry after 220.836003ms: waiting for machine to come up
	I0115 10:38:11.753500   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754152   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.754119   47689 retry.go:31] will retry after 288.710195ms: waiting for machine to come up
	I0115 10:38:12.044613   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045149   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045179   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.045065   47689 retry.go:31] will retry after 321.962888ms: waiting for machine to come up
	I0115 10:38:12.368694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369119   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369171   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.369075   47689 retry.go:31] will retry after 457.128837ms: waiting for machine to come up
	I0115 10:38:12.827574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828079   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828108   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.828011   47689 retry.go:31] will retry after 524.042929ms: waiting for machine to come up
	I0115 10:38:13.353733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354288   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354315   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:13.354237   47689 retry.go:31] will retry after 885.937378ms: waiting for machine to come up
	I0115 10:38:14.241653   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242258   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:14.242185   47689 retry.go:31] will retry after 1.168061338s: waiting for machine to come up
	I0115 10:38:14.984346   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:14.984377   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:14.984395   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.129596   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:15.129627   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:15.129650   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.224825   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.224852   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:15.628377   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.666573   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.666642   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:16.128080   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:16.148642   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:38:16.156904   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:38:16.156927   46387 api_server.go:131] duration metric: took 7.529154555s to wait for apiserver health ...
	I0115 10:38:16.156936   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:38:16.156942   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:16.159248   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:13.665699   46584 crio.go:444] Took 1.986003 seconds to copy over tarball
	I0115 10:38:13.665769   46584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:16.702911   46584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037102789s)
	I0115 10:38:16.702954   46584 crio.go:451] Took 3.037230 seconds to extract the tarball
	I0115 10:38:16.702966   46584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:16.160810   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:16.173072   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:16.205009   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:16.216599   46387 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:16.216637   46387 system_pods.go:61] "coredns-5644d7b6d9-5qcrz" [3fc31c2b-9c3f-4167-8b3f-bbe262591a90] Running
	I0115 10:38:16.216645   46387 system_pods.go:61] "coredns-5644d7b6d9-rgrbc" [1c2c2a33-f329-4cb3-8e05-900a252ceed3] Running
	I0115 10:38:16.216651   46387 system_pods.go:61] "etcd-old-k8s-version-206509" [8c2919cc-4b82-4387-be0d-f3decf4b324b] Running
	I0115 10:38:16.216658   46387 system_pods.go:61] "kube-apiserver-old-k8s-version-206509" [51e63cf2-5728-471d-b447-3f3aa9454ac7] Running
	I0115 10:38:16.216663   46387 system_pods.go:61] "kube-controller-manager-old-k8s-version-206509" [6dec6bf0-ce5d-4f87-8bf7-c774214eb8ea] Running
	I0115 10:38:16.216668   46387 system_pods.go:61] "kube-proxy-w9fdn" [42b28054-8876-4854-a041-62be5688c1c2] Running
	I0115 10:38:16.216675   46387 system_pods.go:61] "kube-scheduler-old-k8s-version-206509" [7a50352c-2129-4de4-84e8-3cb5d8ccd463] Running
	I0115 10:38:16.216681   46387 system_pods.go:61] "storage-provisioner" [f341413b-8261-4a78-9f28-449be173cf19] Running
	I0115 10:38:16.216690   46387 system_pods.go:74] duration metric: took 11.655731ms to wait for pod list to return data ...
	I0115 10:38:16.216703   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:16.220923   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:16.220962   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:16.220978   46387 node_conditions.go:105] duration metric: took 4.267954ms to run NodePressure ...
	I0115 10:38:16.221005   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:16.519042   46387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:16.523772   46387 retry.go:31] will retry after 264.775555ms: kubelet not initialised
	I0115 10:38:17.172203   46387 retry.go:31] will retry after 553.077445ms: kubelet not initialised
	I0115 10:38:18.053202   46387 retry.go:31] will retry after 653.279352ms: kubelet not initialised
	I0115 10:38:18.837753   46387 retry.go:31] will retry after 692.673954ms: kubelet not initialised
	I0115 10:38:19.596427   46387 retry.go:31] will retry after 679.581071ms: kubelet not initialised
	I0115 10:38:15.412204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412706   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412766   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:15.412670   47689 retry.go:31] will retry after 895.041379ms: waiting for machine to come up
	I0115 10:38:16.309188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309764   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:16.309692   47689 retry.go:31] will retry after 1.593821509s: waiting for machine to come up
	I0115 10:38:17.904625   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905131   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905168   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:17.905073   47689 retry.go:31] will retry after 2.002505122s: waiting for machine to come up
	I0115 10:38:16.745093   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:17.184204   46584 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:17.184235   46584 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:17.184325   46584 ssh_runner.go:195] Run: crio config
	I0115 10:38:17.249723   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:17.249748   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:17.249764   46584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:17.249782   46584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-781270 NodeName:embed-certs-781270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:17.249936   46584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-781270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:17.250027   46584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-781270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:38:17.250091   46584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:17.262237   46584 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:17.262313   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:17.273370   46584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0115 10:38:17.292789   46584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:17.312254   46584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0115 10:38:17.332121   46584 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:17.336199   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:17.349009   46584 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270 for IP: 192.168.72.222
	I0115 10:38:17.349047   46584 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:17.349200   46584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:17.349246   46584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:17.349316   46584 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/client.key
	I0115 10:38:17.685781   46584 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key.4e007618
	I0115 10:38:17.685874   46584 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key
	I0115 10:38:17.685990   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:17.686022   46584 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:17.686033   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:17.686054   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:17.686085   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:17.686107   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:17.686147   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:17.686866   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:17.713652   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:17.744128   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:17.771998   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:17.796880   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:17.822291   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:17.848429   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:17.874193   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:17.898873   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:17.922742   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:17.945123   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:17.967188   46584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:17.983237   46584 ssh_runner.go:195] Run: openssl version
	I0115 10:38:17.988658   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:17.998141   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002462   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002521   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.008136   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:18.017766   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:18.027687   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032418   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032479   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.038349   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:18.048395   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:18.058675   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063369   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063441   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.068886   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:18.078459   46584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:18.083181   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:18.089264   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:18.095399   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:18.101292   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:18.107113   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:18.112791   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:38:18.118337   46584 kubeadm.go:404] StartCluster: {Name:embed-certs-781270 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:18.118561   46584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:18.118611   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:18.162363   46584 cri.go:89] found id: ""
	I0115 10:38:18.162454   46584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:18.172261   46584 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:18.172286   46584 kubeadm.go:636] restartCluster start
	I0115 10:38:18.172357   46584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:18.181043   46584 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.182845   46584 kubeconfig.go:92] found "embed-certs-781270" server: "https://192.168.72.222:8443"
	I0115 10:38:18.186506   46584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:18.194997   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.195069   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.205576   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.695105   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.695200   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.709836   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.195362   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.195533   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.210585   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.695088   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.695201   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.710436   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.196063   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.196145   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.211948   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.695433   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.695545   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.710981   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.195510   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.195588   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.206769   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.695111   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.695192   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.706765   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.288898   46387 retry.go:31] will retry after 1.97886626s: kubelet not initialised
	I0115 10:38:22.273756   46387 retry.go:31] will retry after 2.35083465s: kubelet not initialised
	I0115 10:38:19.909015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909598   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:19.909539   47689 retry.go:31] will retry after 2.883430325s: waiting for machine to come up
	I0115 10:38:22.794280   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794702   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794729   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:22.794660   47689 retry.go:31] will retry after 3.219865103s: waiting for machine to come up
	I0115 10:38:22.195343   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.195454   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.210740   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:22.695835   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.695900   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.710247   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.195555   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.195633   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.207117   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.695569   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.695632   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.706867   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.195323   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.195428   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.207679   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.695971   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.696049   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.708342   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.195900   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.195994   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.207896   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.695417   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.695490   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.706180   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.195799   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.195890   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.206859   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.695558   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.695648   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.706652   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.630486   46387 retry.go:31] will retry after 5.638904534s: kubelet not initialised
	I0115 10:38:26.016121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016496   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016520   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:26.016463   47689 retry.go:31] will retry after 3.426285557s: waiting for machine to come up
	I0115 10:38:29.447165   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447643   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Found IP for machine: 192.168.39.125
	I0115 10:38:29.447678   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has current primary IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447719   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserving static IP address...
	I0115 10:38:29.448146   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.448172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | skip adding static IP to network mk-default-k8s-diff-port-709012 - found existing host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"}
	I0115 10:38:29.448183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserved static IP address: 192.168.39.125
	I0115 10:38:29.448204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for SSH to be available...
	I0115 10:38:29.448215   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Getting to WaitForSSH function...
	I0115 10:38:29.450376   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450690   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.450715   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH client type: external
	I0115 10:38:29.450867   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa (-rw-------)
	I0115 10:38:29.450899   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:29.450909   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | About to run SSH command:
	I0115 10:38:29.450919   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | exit 0
	I0115 10:38:29.550560   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:29.550940   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetConfigRaw
	I0115 10:38:29.551686   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.554629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555085   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.555117   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555426   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:38:29.555642   47063 machine.go:88] provisioning docker machine ...
	I0115 10:38:29.555672   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:29.555875   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556053   47063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-709012"
	I0115 10:38:29.556076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556217   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.558493   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.558804   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.558835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.559018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.559209   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559363   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.559677   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.560009   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.560028   47063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-709012 && echo "default-k8s-diff-port-709012" | sudo tee /etc/hostname
	I0115 10:38:29.706028   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-709012
	
	I0115 10:38:29.706059   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.708893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.709343   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709409   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.709631   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709789   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709938   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.710121   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.710473   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.710501   47063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-709012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-709012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-709012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:29.845884   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:29.845916   47063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:29.845938   47063 buildroot.go:174] setting up certificates
	I0115 10:38:29.845953   47063 provision.go:83] configureAuth start
	I0115 10:38:29.845973   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.846293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.849072   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.849558   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849755   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.852196   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852548   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.852574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852664   47063 provision.go:138] copyHostCerts
	I0115 10:38:29.852716   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:29.852726   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:29.852778   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:29.852870   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:29.852877   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:29.852896   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:29.852957   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:29.852964   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:29.852981   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:29.853031   47063 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-709012 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube default-k8s-diff-port-709012]
	I0115 10:38:30.777181   46388 start.go:369] acquired machines lock for "no-preload-824502" in 58.676870352s
	I0115 10:38:30.777252   46388 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:30.777263   46388 fix.go:54] fixHost starting: 
	I0115 10:38:30.777697   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:30.777733   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:30.795556   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0115 10:38:30.795931   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:30.796387   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:38:30.796417   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:30.796825   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:30.797001   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:30.797164   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:38:30.798953   46388 fix.go:102] recreateIfNeeded on no-preload-824502: state=Stopped err=<nil>
	I0115 10:38:30.798978   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	W0115 10:38:30.799146   46388 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:30.800981   46388 out.go:177] * Restarting existing kvm2 VM for "no-preload-824502" ...
	I0115 10:38:27.195033   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.195128   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.205968   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:27.695992   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.696075   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.707112   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.195726   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:28.195798   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:28.206794   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.206837   46584 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:28.206846   46584 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:28.206858   46584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:28.206917   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:28.256399   46584 cri.go:89] found id: ""
	I0115 10:38:28.256468   46584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:28.272234   46584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:28.281359   46584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:28.281439   46584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290385   46584 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290431   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:28.417681   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.012673   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.212322   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.296161   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.378870   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:29.378965   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.879587   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.379077   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.879281   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:31.379626   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.951966   47063 provision.go:172] copyRemoteCerts
	I0115 10:38:29.952019   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:29.952040   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.954784   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955082   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.955104   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955285   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.955466   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.955649   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.955793   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.057077   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:30.081541   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 10:38:30.109962   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:30.140809   47063 provision.go:86] duration metric: configureAuth took 294.836045ms
	I0115 10:38:30.140840   47063 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:30.141071   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:30.141167   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.144633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.144975   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.145015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.145177   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.145378   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145539   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145703   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.145927   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.146287   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.146310   47063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:30.484993   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:30.485022   47063 machine.go:91] provisioned docker machine in 929.358403ms
	I0115 10:38:30.485035   47063 start.go:300] post-start starting for "default-k8s-diff-port-709012" (driver="kvm2")
	I0115 10:38:30.485049   47063 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:30.485067   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.485390   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:30.485431   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.488115   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488473   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.488512   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.488837   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.489018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.489171   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.590174   47063 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:30.594879   47063 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:30.594907   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:30.594974   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:30.595069   47063 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:30.595183   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:30.604525   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:30.631240   47063 start.go:303] post-start completed in 146.190685ms
	I0115 10:38:30.631270   47063 fix.go:56] fixHost completed within 20.431996373s
	I0115 10:38:30.631293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.634188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634544   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.634577   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634807   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.635014   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635185   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.635574   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.636012   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.636032   47063 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:30.777043   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315110.724251584
	
	I0115 10:38:30.777069   47063 fix.go:206] guest clock: 1705315110.724251584
	I0115 10:38:30.777079   47063 fix.go:219] Guest: 2024-01-15 10:38:30.724251584 +0000 UTC Remote: 2024-01-15 10:38:30.631274763 +0000 UTC m=+210.817197544 (delta=92.976821ms)
	I0115 10:38:30.777107   47063 fix.go:190] guest clock delta is within tolerance: 92.976821ms
	I0115 10:38:30.777114   47063 start.go:83] releasing machines lock for "default-k8s-diff-port-709012", held for 20.577876265s
	I0115 10:38:30.777143   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.777406   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:30.780611   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781041   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.781076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781250   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.781876   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782186   47063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:30.782240   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.782295   47063 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:30.782321   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.785597   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786228   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.786255   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786386   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786698   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.786881   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.787023   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.787078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.787095   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.787204   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.787774   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.787930   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.788121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.788345   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.919659   47063 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:30.926237   47063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:31.076313   47063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:31.085010   47063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:31.085087   47063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:31.104237   47063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:31.104265   47063 start.go:475] detecting cgroup driver to use...
	I0115 10:38:31.104331   47063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:31.124044   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:31.139494   47063 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:31.139581   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:31.154894   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:31.172458   47063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:31.307400   47063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:31.496675   47063 docker.go:233] disabling docker service ...
	I0115 10:38:31.496733   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:31.513632   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:31.526228   47063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:31.681556   47063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:31.816489   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:31.831193   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:31.853530   47063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:31.853602   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.864559   47063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:31.864661   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.875384   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.888460   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.904536   47063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:31.915622   47063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:31.929209   47063 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:31.929266   47063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:31.948691   47063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:31.959872   47063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:32.102988   47063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:32.300557   47063 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:32.300632   47063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:32.305636   47063 start.go:543] Will wait 60s for crictl version
	I0115 10:38:32.305691   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:38:32.309883   47063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:32.354459   47063 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:32.354594   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.402443   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.463150   47063 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:30.802324   46388 main.go:141] libmachine: (no-preload-824502) Calling .Start
	I0115 10:38:30.802525   46388 main.go:141] libmachine: (no-preload-824502) Ensuring networks are active...
	I0115 10:38:30.803127   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network default is active
	I0115 10:38:30.803476   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network mk-no-preload-824502 is active
	I0115 10:38:30.803799   46388 main.go:141] libmachine: (no-preload-824502) Getting domain xml...
	I0115 10:38:30.804452   46388 main.go:141] libmachine: (no-preload-824502) Creating domain...
	I0115 10:38:32.173614   46388 main.go:141] libmachine: (no-preload-824502) Waiting to get IP...
	I0115 10:38:32.174650   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.175113   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.175211   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.175106   47808 retry.go:31] will retry after 275.127374ms: waiting for machine to come up
	I0115 10:38:32.451595   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.452150   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.452183   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.452095   47808 retry.go:31] will retry after 258.80121ms: waiting for machine to come up
	I0115 10:38:32.712701   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.713348   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.713531   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.713459   47808 retry.go:31] will retry after 440.227123ms: waiting for machine to come up
	I0115 10:38:33.155845   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.156595   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.156625   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.156500   47808 retry.go:31] will retry after 428.795384ms: waiting for machine to come up
	I0115 10:38:33.587781   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.588169   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.588190   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.588118   47808 retry.go:31] will retry after 720.536787ms: waiting for machine to come up
	I0115 10:38:34.310098   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:34.310640   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:34.310674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:34.310604   47808 retry.go:31] will retry after 841.490959ms: waiting for machine to come up
	I0115 10:38:30.274782   46387 retry.go:31] will retry after 7.853808987s: kubelet not initialised
	I0115 10:38:32.464592   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:32.467583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.467962   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:32.467993   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.468218   47063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:32.472463   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:32.488399   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:32.488488   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:32.535645   47063 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:32.535776   47063 ssh_runner.go:195] Run: which lz4
	I0115 10:38:32.541468   47063 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:38:32.547264   47063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:32.547297   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:34.427435   47063 crio.go:444] Took 1.886019 seconds to copy over tarball
	I0115 10:38:34.427510   47063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:31.879639   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.379656   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.408694   46584 api_server.go:72] duration metric: took 3.029823539s to wait for apiserver process to appear ...
	I0115 10:38:32.408737   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:32.408760   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.614020   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:36.614053   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:36.614068   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.687561   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.687606   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.134400   46387 retry.go:31] will retry after 7.988567077s: kubelet not initialised
	I0115 10:38:35.154196   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:35.154644   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:35.154674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:35.154615   47808 retry.go:31] will retry after 1.099346274s: waiting for machine to come up
	I0115 10:38:36.255575   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:36.256111   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:36.256151   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:36.256038   47808 retry.go:31] will retry after 1.294045748s: waiting for machine to come up
	I0115 10:38:37.551734   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:37.552569   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:37.552593   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:37.552527   47808 retry.go:31] will retry after 1.720800907s: waiting for machine to come up
	I0115 10:38:39.275250   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:39.275651   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:39.275684   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:39.275595   47808 retry.go:31] will retry after 1.914509744s: waiting for machine to come up
	I0115 10:38:37.765711   47063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.338169875s)
	I0115 10:38:37.765741   47063 crio.go:451] Took 3.338279 seconds to extract the tarball
	I0115 10:38:37.765753   47063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:37.807016   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:37.858151   47063 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:37.858195   47063 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:37.858295   47063 ssh_runner.go:195] Run: crio config
	I0115 10:38:37.933830   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:37.933851   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:37.933872   47063 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:37.933896   47063 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-709012 NodeName:default-k8s-diff-port-709012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:37.934040   47063 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-709012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
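The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are rendered to /var/tmp/minikube/kubeadm.yaml.new and only swapped in if they differ from the file already on the node. As a minimal sketch of how the rendered file could be sanity-checked by hand — not something this log run actually does, and assuming the bundled kubeadm binary is new enough (v1.26+) to have the config validate subcommand:

    # compare the freshly rendered config against the one currently in use
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # ask kubeadm itself to validate the rendered document set
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new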
	I0115 10:38:37.934132   47063 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-709012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0115 10:38:37.934202   47063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:37.945646   47063 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:37.945728   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:37.957049   47063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0115 10:38:37.978770   47063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:37.995277   47063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0115 10:38:38.012964   47063 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:38.016803   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
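The one-liner above keeps /etc/hosts idempotent: it strips any stale control-plane.minikube.internal entry and re-appends the current mapping. Unrolled purely for readability (same commands; the temp-file name here is illustrative rather than the $$-based name used in the log):

    # drop any existing mapping for the control-plane alias, then re-add it
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.39.125	control-plane.minikube.internal"
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts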
	I0115 10:38:38.028708   47063 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012 for IP: 192.168.39.125
	I0115 10:38:38.028740   47063 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:38.028887   47063 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:38.028926   47063 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:38.028988   47063 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.key
	I0115 10:38:38.048801   47063 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key.657bd91f
	I0115 10:38:38.048895   47063 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key
	I0115 10:38:38.049019   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:38.049058   47063 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:38.049075   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:38.049110   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:38.049149   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:38.049183   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:38.049241   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:38.049848   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:38.078730   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:38.102069   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:38.124278   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:38.150354   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:38.173703   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:38.201758   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:38.227016   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:38.249876   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:38.271859   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:38.294051   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:38.316673   47063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:38.335128   47063 ssh_runner.go:195] Run: openssl version
	I0115 10:38:38.342574   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:38.355889   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361805   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361871   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.369192   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:38.381493   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:38.391714   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396728   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396787   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.402624   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:38.413957   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:38.425258   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430627   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430697   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.440362   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
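The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory lookup convention: the link name is the certificate's subject-name hash plus a .0 suffix. A small sketch of producing such a link for any CA file, using the same openssl x509 -hash invocation that appears in the log:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # first cert with this hash gets the .0 suffix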
	I0115 10:38:38.453323   47063 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:38.458803   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:38.465301   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:38.471897   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:38.478274   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:38.484890   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:38.490909   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
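Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires inside that window and would have to be regenerated before the restart. The same check, written as an explicit loop over a few of the control-plane certs for illustration:

    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 \
        && echo "$crt: valid for at least 24h" \
        || echo "$crt: expires within 24h"
    done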
	I0115 10:38:38.496868   47063 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:38.496966   47063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:38.497015   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:38.539389   47063 cri.go:89] found id: ""
	I0115 10:38:38.539475   47063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:38.550998   47063 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:38.551020   47063 kubeadm.go:636] restartCluster start
	I0115 10:38:38.551076   47063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:38.561885   47063 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:38.563439   47063 kubeconfig.go:92] found "default-k8s-diff-port-709012" server: "https://192.168.39.125:8444"
	I0115 10:38:38.566482   47063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:38.576458   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:38.576521   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:38.588702   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.077323   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.077407   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.089885   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.577363   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.577441   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.591111   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:36.909069   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.917556   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.917594   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.409134   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.417305   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.417348   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.909251   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.916788   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.916824   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.409535   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:38.416538   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:38.416572   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.908929   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.863238   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.863279   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.863294   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.869897   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.869922   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.909113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.065422   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:40.065467   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:40.408921   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.414320   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:38:40.424348   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:40.424378   46584 api_server.go:131] duration metric: took 8.015632919s to wait for apiserver health ...
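The per-check [+]/[-] listings above are the verbose form of the apiserver's /healthz endpoint; the wait loop polls it until every post-start hook reports ok (here after roughly 8 seconds). The same output can be fetched by hand, assuming the endpoint is reachable from the host and anonymous access to /healthz is permitted (on a locked-down cluster you would additionally have to present client credentials):

    # ?verbose reproduces the per-check listing; -k skips verification against the cluster CA
    curl -k 'https://192.168.72.222:8443/healthz?verbose'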
	I0115 10:38:40.424390   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:40.424398   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:40.426615   46584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:40.427979   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:40.450675   46584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
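minikube copies a 457-byte bridge CNI definition to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in this log. As a rough, hypothetical illustration of what a minimal bridge conflist of this kind typically looks like — field values below are assumptions (only the 10.244.0.0/16 pod CIDR comes from this log), not the file minikube actually writes:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF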
	I0115 10:38:40.478174   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:40.492540   46584 system_pods.go:59] 9 kube-system pods found
	I0115 10:38:40.492582   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492593   46584 system_pods.go:61] "coredns-5dd5756b68-w4p2z" [87d362df-5c29-4a04-b44f-c502cf6849bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492609   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:40.492619   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:40.492633   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:40.492646   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:40.492658   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:40.492671   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:40.492687   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:40.492700   46584 system_pods.go:74] duration metric: took 14.502202ms to wait for pod list to return data ...
	I0115 10:38:40.492715   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:40.496471   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:40.496504   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:40.496517   46584 node_conditions.go:105] duration metric: took 3.794528ms to run NodePressure ...
	I0115 10:38:40.496538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:40.770732   46584 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777051   46584 kubeadm.go:787] kubelet initialised
	I0115 10:38:40.777118   46584 kubeadm.go:788] duration metric: took 6.307286ms waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777139   46584 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:40.784605   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.798293   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798365   46584 pod_ready.go:81] duration metric: took 13.654765ms waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.798389   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798402   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.807236   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807276   46584 pod_ready.go:81] duration metric: took 8.862426ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.807289   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807297   46584 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.813904   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813932   46584 pod_ready.go:81] duration metric: took 6.62492ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.813944   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813951   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.882407   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882458   46584 pod_ready.go:81] duration metric: took 68.496269ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.882472   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882485   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.282123   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282158   46584 pod_ready.go:81] duration metric: took 399.656962ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.282172   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282181   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.683979   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684007   46584 pod_ready.go:81] duration metric: took 401.816493ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.684017   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684023   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.082465   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082490   46584 pod_ready.go:81] duration metric: took 398.460424ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.082501   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082509   46584 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.484454   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484490   46584 pod_ready.go:81] duration metric: took 401.970108ms waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.484504   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484513   46584 pod_ready.go:38] duration metric: took 1.707353329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:42.484534   46584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:42.499693   46584 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:42.499715   46584 kubeadm.go:640] restartCluster took 24.327423485s
	I0115 10:38:42.499733   46584 kubeadm.go:406] StartCluster complete in 24.381392225s
	I0115 10:38:42.499752   46584 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.499817   46584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:42.502965   46584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.503219   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:42.503253   46584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:42.503356   46584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-781270"
	I0115 10:38:42.503374   46584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-781270"
	I0115 10:38:42.503383   46584 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-781270"
	I0115 10:38:42.503395   46584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-781270"
	W0115 10:38:42.503402   46584 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:42.503451   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:42.503493   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503504   46584 addons.go:69] Setting metrics-server=true in profile "embed-certs-781270"
	I0115 10:38:42.503520   46584 addons.go:234] Setting addon metrics-server=true in "embed-certs-781270"
	W0115 10:38:42.503533   46584 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:42.503577   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503826   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503850   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503855   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503871   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503895   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503924   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.522809   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0115 10:38:42.523025   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0115 10:38:42.523163   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0115 10:38:42.523260   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523382   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523755   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523861   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.523990   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524323   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524345   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524415   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524585   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524605   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524825   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524992   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525017   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525375   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525412   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525571   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.525747   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.528762   46584 addons.go:234] Setting addon default-storageclass=true in "embed-certs-781270"
	W0115 10:38:42.528781   46584 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:42.528807   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.529117   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.529140   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.544693   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0115 10:38:42.545013   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0115 10:38:42.545528   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.545628   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.546235   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546265   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546268   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546280   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546650   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546687   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546820   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.546918   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0115 10:38:42.547068   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.547459   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.548255   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.548269   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.548859   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.549002   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.549393   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.549415   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.549597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.551555   46584 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:42.552918   46584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:42.554551   46584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.554573   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:42.554591   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.554552   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:42.554648   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:42.554662   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.561284   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.561706   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.561854   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.562023   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.562123   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.562179   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.562229   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.564058   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564432   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.564529   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564798   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.564977   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.565148   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.565282   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.570688   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0115 10:38:42.571242   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.571724   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.571749   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.571989   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.572135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.573685   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.573936   46584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.573952   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:42.573969   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.576948   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577272   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.577312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577680   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.577866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.577988   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.578134   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.687267   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:42.687293   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:42.707058   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:42.707083   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:42.727026   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.745278   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.777425   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:42.777450   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:42.779528   46584 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:42.832109   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:43.011971   46584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-781270" context rescaled to 1 replicas
	I0115 10:38:43.012022   46584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:43.014704   46584 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:43.016005   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:44.039814   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.294486297s)
	I0115 10:38:44.039891   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312831152s)
	I0115 10:38:44.039895   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039928   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039946   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040024   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040264   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040283   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040293   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040302   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040412   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040427   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040451   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040613   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040744   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040750   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040755   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040791   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040800   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054113   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.054134   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.054409   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.054454   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054469   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.151470   46584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.135429651s)
	I0115 10:38:44.151517   46584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:44.151560   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319411531s)
	I0115 10:38:44.151601   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.151626   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.151954   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.151973   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152001   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.152012   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.152312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.152317   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.152328   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152338   46584 addons.go:470] Verifying addon metrics-server=true in "embed-certs-781270"
	I0115 10:38:44.155687   46584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:41.191855   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:41.192271   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:41.192310   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:41.192239   47808 retry.go:31] will retry after 2.364591434s: waiting for machine to come up
	I0115 10:38:43.560150   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:43.560624   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:43.560648   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:43.560581   47808 retry.go:31] will retry after 3.204170036s: waiting for machine to come up
	I0115 10:38:40.076788   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.076875   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.089217   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:40.577351   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.577448   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.593294   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.076625   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.076730   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.092700   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.577413   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.577513   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.592266   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.076755   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.076862   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.090411   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.576920   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.576982   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.589590   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.077312   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.077410   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.089732   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.576781   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.576857   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.592414   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.076854   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.076922   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.089009   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.576614   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.576713   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.592137   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.157450   46584 addons.go:505] enable addons completed in 1.654202196s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:38:46.156830   46584 node_ready.go:58] node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:46.129496   46387 retry.go:31] will retry after 7.881779007s: kubelet not initialised
	I0115 10:38:46.766674   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:46.767050   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:46.767072   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:46.767013   47808 retry.go:31] will retry after 3.09324278s: waiting for machine to come up
	I0115 10:38:45.076819   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.076882   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.092624   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:45.576654   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.576724   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.590306   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.076821   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.076920   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.090883   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.577506   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.577590   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.590379   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.076909   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.076997   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.088742   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.577287   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.577371   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.589014   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.076538   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.076608   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.088956   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.576474   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.576573   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.588122   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.588146   47063 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:48.588153   47063 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:48.588162   47063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:48.588214   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:48.631367   47063 cri.go:89] found id: ""
	I0115 10:38:48.631441   47063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:48.648653   47063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:48.657948   47063 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:48.658017   47063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668103   47063 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668124   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:48.787890   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.559039   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.767002   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.842165   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:47.155176   46584 node_ready.go:49] node "embed-certs-781270" has status "Ready":"True"
	I0115 10:38:47.155200   46584 node_ready.go:38] duration metric: took 3.003671558s waiting for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:47.155212   46584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:47.162248   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:49.169885   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:51.190513   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:49.864075   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864515   46388 main.go:141] libmachine: (no-preload-824502) Found IP for machine: 192.168.50.136
	I0115 10:38:49.864538   46388 main.go:141] libmachine: (no-preload-824502) Reserving static IP address...
	I0115 10:38:49.864554   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has current primary IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864990   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.865034   46388 main.go:141] libmachine: (no-preload-824502) DBG | skip adding static IP to network mk-no-preload-824502 - found existing host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"}
	I0115 10:38:49.865052   46388 main.go:141] libmachine: (no-preload-824502) Reserved static IP address: 192.168.50.136
	I0115 10:38:49.865073   46388 main.go:141] libmachine: (no-preload-824502) Waiting for SSH to be available...
	I0115 10:38:49.865115   46388 main.go:141] libmachine: (no-preload-824502) DBG | Getting to WaitForSSH function...
	I0115 10:38:49.867410   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867671   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.867708   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867864   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH client type: external
	I0115 10:38:49.867924   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa (-rw-------)
	I0115 10:38:49.867961   46388 main.go:141] libmachine: (no-preload-824502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:49.867983   46388 main.go:141] libmachine: (no-preload-824502) DBG | About to run SSH command:
	I0115 10:38:49.867994   46388 main.go:141] libmachine: (no-preload-824502) DBG | exit 0
	I0115 10:38:49.966638   46388 main.go:141] libmachine: (no-preload-824502) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:49.967072   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetConfigRaw
	I0115 10:38:49.967925   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:49.970409   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.970811   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.970846   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.971099   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:38:49.971300   46388 machine.go:88] provisioning docker machine ...
	I0115 10:38:49.971327   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:49.971561   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971757   46388 buildroot.go:166] provisioning hostname "no-preload-824502"
	I0115 10:38:49.971783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971970   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:49.974279   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974723   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.974758   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974917   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:49.975088   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975247   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975460   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:49.975640   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:49.976081   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:49.976099   46388 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-824502 && echo "no-preload-824502" | sudo tee /etc/hostname
	I0115 10:38:50.121181   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-824502
	
	I0115 10:38:50.121206   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.123821   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124162   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.124194   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124371   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.124588   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124788   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124940   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.125103   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.125410   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.125429   46388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-824502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-824502/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-824502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:50.259649   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:50.259680   46388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:50.259710   46388 buildroot.go:174] setting up certificates
	I0115 10:38:50.259724   46388 provision.go:83] configureAuth start
	I0115 10:38:50.259736   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:50.260022   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:50.262296   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262683   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.262704   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262848   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.265340   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265715   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.265743   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265885   46388 provision.go:138] copyHostCerts
	I0115 10:38:50.265942   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:50.265953   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:50.266025   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:50.266128   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:50.266143   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:50.266178   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:50.266258   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:50.266268   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:50.266296   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:50.266359   46388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.no-preload-824502 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube no-preload-824502]
	I0115 10:38:50.666513   46388 provision.go:172] copyRemoteCerts
	I0115 10:38:50.666584   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:50.666615   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.669658   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670109   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.670162   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670410   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.670632   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.670812   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.671067   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:50.774944   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:50.799533   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:50.824210   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:38:50.849191   46388 provision.go:86] duration metric: configureAuth took 589.452836ms
	I0115 10:38:50.849224   46388 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:50.849455   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:38:50.849560   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.852884   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853291   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.853346   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853508   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.853746   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.853936   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.854105   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.854244   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.854708   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.854735   46388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:51.246971   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:51.246997   46388 machine.go:91] provisioned docker machine in 1.275679147s
	I0115 10:38:51.247026   46388 start.go:300] post-start starting for "no-preload-824502" (driver="kvm2")
	I0115 10:38:51.247043   46388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:51.247063   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.247450   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:51.247481   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.250477   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250751   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.250783   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250951   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.251085   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.251227   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.251308   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.350552   46388 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:51.355893   46388 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:51.355918   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:51.355994   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:51.356096   46388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:51.356220   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:51.366598   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:51.393765   46388 start.go:303] post-start completed in 146.702407ms
	I0115 10:38:51.393798   46388 fix.go:56] fixHost completed within 20.616533939s
	I0115 10:38:51.393826   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.396990   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397531   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.397563   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397785   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.398006   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398190   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398367   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.398602   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:51.399038   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:51.399057   46388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:51.532940   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315131.477577825
	
	I0115 10:38:51.532962   46388 fix.go:206] guest clock: 1705315131.477577825
	I0115 10:38:51.532971   46388 fix.go:219] Guest: 2024-01-15 10:38:51.477577825 +0000 UTC Remote: 2024-01-15 10:38:51.393803771 +0000 UTC m=+361.851018624 (delta=83.774054ms)
	I0115 10:38:51.533006   46388 fix.go:190] guest clock delta is within tolerance: 83.774054ms
	I0115 10:38:51.533011   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 20.755785276s
	I0115 10:38:51.533031   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.533296   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:51.536532   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537167   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.537206   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537411   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538058   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538236   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538395   46388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:51.538461   46388 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:51.538485   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.538492   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.541387   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541419   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541791   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541836   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541878   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541952   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.541959   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.542137   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542219   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.542317   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542396   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542477   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.542535   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542697   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.668594   46388 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:51.675328   46388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:51.822660   46388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:51.830242   46388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:51.830318   46388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:51.846032   46388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:51.846067   46388 start.go:475] detecting cgroup driver to use...
	I0115 10:38:51.846147   46388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:51.863608   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:51.875742   46388 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:51.875810   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:51.888307   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:51.902327   46388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:52.027186   46388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:52.170290   46388 docker.go:233] disabling docker service ...
	I0115 10:38:52.170372   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:52.184106   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:52.195719   46388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:52.304630   46388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:52.420312   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:52.434213   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:52.453883   46388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:52.453946   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.464662   46388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:52.464726   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.474291   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.483951   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.493132   46388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:52.503668   46388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:52.512336   46388 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:52.512410   46388 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:52.529602   46388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:52.541735   46388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:52.664696   46388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:52.844980   46388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:52.845051   46388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:52.850380   46388 start.go:543] Will wait 60s for crictl version
	I0115 10:38:52.850463   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:52.854500   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:52.890488   46388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:52.890595   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:52.944999   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:53.005494   46388 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:38:54.017897   46387 retry.go:31] will retry after 11.956919729s: kubelet not initialised
	I0115 10:38:53.006783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:53.009509   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.009903   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:53.009934   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.010135   46388 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:53.014612   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:53.029014   46388 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:38:53.029063   46388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:53.073803   46388 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:38:53.073839   46388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:38:53.073906   46388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.073943   46388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.073979   46388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.073945   46388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.073914   46388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.073932   46388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.073931   46388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.073918   46388 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075357   46388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.075478   46388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.075515   46388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.075532   46388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.075482   46388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.075483   46388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.234170   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.248000   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0115 10:38:53.264387   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.289576   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.303961   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.326078   46388 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0115 10:38:53.326132   46388 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.326176   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.331268   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.334628   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.366099   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.426012   46388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0115 10:38:53.426058   46388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.426106   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.426316   46388 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0115 10:38:53.426346   46388 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.426377   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505102   46388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0115 10:38:53.505194   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.505201   46388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.505286   46388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0115 10:38:53.505358   46388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.505410   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505334   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.507596   46388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0115 10:38:53.507630   46388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.507674   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.544052   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.544142   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.544078   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.544263   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.544458   46388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0115 10:38:53.544505   46388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.544550   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.568682   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0115 10:38:53.568786   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.568808   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.681576   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681671   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0115 10:38:53.681777   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:53.681779   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681918   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.681990   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.682040   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0115 10:38:53.682108   46388 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681996   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.682157   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681927   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 10:38:53.682277   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:53.728102   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:53.728204   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:49.944443   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:49.944529   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.445085   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.945395   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.444784   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.944622   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.444886   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.460961   47063 api_server.go:72] duration metric: took 2.516519088s to wait for apiserver process to appear ...
	I0115 10:38:52.460980   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:52.460996   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:52.461498   47063 api_server.go:269] stopped: https://192.168.39.125:8444/healthz: Get "https://192.168.39.125:8444/healthz": dial tcp 192.168.39.125:8444: connect: connection refused
	I0115 10:38:52.961968   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:53.672555   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:55.685156   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:56.172493   46584 pod_ready.go:92] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.172521   46584 pod_ready.go:81] duration metric: took 9.010249071s waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.172534   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.178057   46584 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178080   46584 pod_ready.go:81] duration metric: took 5.538491ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:56.178092   46584 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178100   46584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185048   46584 pod_ready.go:92] pod "etcd-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.185071   46584 pod_ready.go:81] duration metric: took 6.962528ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185082   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190244   46584 pod_ready.go:92] pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.190263   46584 pod_ready.go:81] duration metric: took 5.173778ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190275   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196537   46584 pod_ready.go:92] pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.196555   46584 pod_ready.go:81] duration metric: took 6.272551ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196566   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367735   46584 pod_ready.go:92] pod "kube-proxy-jqgfc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.367766   46584 pod_ready.go:81] duration metric: took 171.191874ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367779   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.209201   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.209232   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.209247   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.283870   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.283914   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.461166   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.476935   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.476968   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:56.961147   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.966917   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.966949   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:57.461505   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:57.467290   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:38:57.482673   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:57.482709   47063 api_server.go:131] duration metric: took 5.021721974s to wait for apiserver health ...
	I0115 10:38:57.482721   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:57.482729   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:57.484809   47063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:57.486522   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:57.503036   47063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:57.523094   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:57.539289   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:57.539332   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:57.539342   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:57.539353   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:57.539361   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:57.539367   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:57.539372   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:57.539378   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:57.539392   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:57.539400   47063 system_pods.go:74] duration metric: took 16.288236ms to wait for pod list to return data ...
	I0115 10:38:57.539415   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:57.547016   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:57.547043   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:57.547053   47063 node_conditions.go:105] duration metric: took 7.632954ms to run NodePressure ...
	I0115 10:38:57.547069   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:57.838097   47063 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847919   47063 kubeadm.go:787] kubelet initialised
	I0115 10:38:57.847945   47063 kubeadm.go:788] duration metric: took 9.818012ms waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847960   47063 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:57.860753   47063 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.866623   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866666   47063 pod_ready.go:81] duration metric: took 5.881593ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.866679   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866687   47063 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.873742   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873772   47063 pod_ready.go:81] duration metric: took 7.070689ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.873787   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873795   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.881283   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881313   47063 pod_ready.go:81] duration metric: took 7.502343ms waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.881328   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881335   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.927473   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927504   47063 pod_ready.go:81] duration metric: took 46.159848ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.927516   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927523   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.329002   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329029   47063 pod_ready.go:81] duration metric: took 401.499694ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.329039   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329046   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.727362   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727394   47063 pod_ready.go:81] duration metric: took 398.336577ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.727411   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727420   47063 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:59.138162   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138195   47063 pod_ready.go:81] duration metric: took 410.766568ms waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:59.138207   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138214   47063 pod_ready.go:38] duration metric: took 1.290244752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:59.138232   47063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:59.173438   47063 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:59.173463   47063 kubeadm.go:640] restartCluster took 20.622435902s
	I0115 10:38:59.173473   47063 kubeadm.go:406] StartCluster complete in 20.676611158s
	I0115 10:38:59.173494   47063 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.173598   47063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:59.176160   47063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.176389   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:59.176558   47063 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:59.176645   47063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176652   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:59.176680   47063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.176696   47063 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:59.176706   47063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176725   47063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-709012"
	I0115 10:38:59.176768   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177130   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177163   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177188   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177220   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177254   47063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.177288   47063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.177305   47063 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:59.177390   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177796   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177849   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.182815   47063 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-709012" context rescaled to 1 replicas
	I0115 10:38:59.182849   47063 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:59.184762   47063 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:59.186249   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:59.196870   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0115 10:38:59.197111   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0115 10:38:59.197493   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.197595   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.198074   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198096   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198236   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198264   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198410   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.198620   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.198634   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.199252   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.199278   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.202438   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0115 10:38:59.202957   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.203462   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.203489   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.203829   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.204271   47063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.204295   47063 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:59.204322   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.204406   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204434   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.204728   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204768   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.220973   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0115 10:38:59.221383   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.221873   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.221898   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.222330   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.222537   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.223337   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0115 10:38:59.223746   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0115 10:38:59.224454   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.224557   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.227071   47063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:59.225090   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.225234   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.228609   47063 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.228624   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:59.228638   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.228668   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229046   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.229064   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229415   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229515   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229671   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.230070   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.230093   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.232470   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.233532   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.235985   47063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:56.206357   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.524032218s)
	I0115 10:38:56.206399   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0115 10:38:56.206444   46388 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: (2.52429359s)
	I0115 10:38:56.206494   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206580   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.524566038s)
	I0115 10:38:56.206594   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206609   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206684   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.52488513s)
	I0115 10:38:56.206806   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0115 10:38:56.206718   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.524535788s)
	I0115 10:38:56.206824   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0115 10:38:56.206756   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.524930105s)
	I0115 10:38:56.206843   46388 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.206863   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206780   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.478563083s)
	I0115 10:38:56.206890   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206908   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.986404   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0115 10:38:56.986480   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0115 10:38:56.986513   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:56.986555   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:59.063376   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.076785591s)
	I0115 10:38:59.063421   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0115 10:38:59.063449   46388 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.063494   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.234530   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.234543   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.237273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.237334   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:59.237349   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:59.237367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.237458   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.237624   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.237776   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.240471   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242356   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.242442   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.242483   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242538   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.245246   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.245394   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.251844   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0115 10:38:59.252344   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.252855   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.252876   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.253245   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.253439   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.255055   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.255299   47063 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.255315   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:59.255331   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.258732   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259370   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.259408   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259554   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.259739   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.259915   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.260060   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.380593   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:59.380623   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:59.387602   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.409765   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.434624   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:59.434655   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:59.514744   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:59.514778   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:59.528401   47063 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:59.528428   47063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:38:59.552331   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:00.775084   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.365286728s)
	I0115 10:39:00.775119   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387483878s)
	I0115 10:39:00.775251   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775268   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775195   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775319   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775697   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775737   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775778   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.775791   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.775805   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775818   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.776009   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.776030   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778922   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.778939   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778949   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.778959   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.779172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.780377   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.780395   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.787873   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.787893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.788142   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.788161   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.882725   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330338587s)
	I0115 10:39:00.882775   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.882792   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883118   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883140   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883144   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.883150   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.883166   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883494   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883513   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883523   47063 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-709012"
	I0115 10:39:00.887782   47063 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:56.767524   46584 pod_ready.go:92] pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.767555   46584 pod_ready.go:81] duration metric: took 399.766724ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.767569   46584 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.776515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:00.777313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:03.358192   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.294671295s)
	I0115 10:39:03.358221   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0115 10:39:03.358249   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:03.358296   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:00.889422   47063 addons.go:505] enable addons completed in 1.71286662s: enabled=[storage-provisioner default-storageclass metrics-server]
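(Aside on the addon lines above: each manifest is copied to /etc/kubernetes/addons on the node and then applied in a single batch with the cluster's own kubectl, exactly as the "scp memory -->" and "kubectl apply -f" lines show. Below is a minimal, hypothetical Go sketch of that apply step; the binary and kubeconfig paths are copied from the log, the manifest list is illustrative, and in minikube the command actually runs over the node's SSH session rather than locally.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddons mirrors the "sudo KUBECONFIG=... kubectl apply -f ..." commands
    // seen in the log. Paths are placeholders for illustration only.
    func applyAddons(manifests []string) error {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(applyAddons([]string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
        }))
    }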
	I0115 10:39:01.533332   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.534081   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.274613   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.277132   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.981700   46387 kubeadm.go:787] kubelet initialised
	I0115 10:39:05.981726   46387 kubeadm.go:788] duration metric: took 49.462651853s waiting for restarted kubelet to initialise ...
	I0115 10:39:05.981737   46387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:05.987142   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993872   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.993896   46387 pod_ready.go:81] duration metric: took 6.725677ms waiting for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993920   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999103   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.999133   46387 pod_ready.go:81] duration metric: took 5.204706ms waiting for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999148   46387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004449   46387 pod_ready.go:92] pod "etcd-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.004472   46387 pod_ready.go:81] duration metric: took 5.315188ms waiting for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004484   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010187   46387 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.010209   46387 pod_ready.go:81] duration metric: took 5.716918ms waiting for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010221   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380715   46387 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.380742   46387 pod_ready.go:81] duration metric: took 370.513306ms waiting for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380756   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780865   46387 pod_ready.go:92] pod "kube-proxy-w9fdn" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.780887   46387 pod_ready.go:81] duration metric: took 400.122851ms waiting for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780899   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179755   46387 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.179785   46387 pod_ready.go:81] duration metric: took 398.879027ms waiting for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179798   46387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.188315   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.429866   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.071542398s)
	I0115 10:39:05.429896   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0115 10:39:05.429927   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:05.429988   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:08.115120   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.685106851s)
	I0115 10:39:08.115147   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0115 10:39:08.115179   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:08.115226   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
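(Aside on the image-loading lines above: the "Loading image" / "podman load -i" pairs push each cached image archive into CRI-O's store one at a time. A short Go sketch of that sequential loop follows; the image paths are taken from the log and the helper name is invented for illustration.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadCachedImages runs "sudo podman load -i <tarball>" for each cached
    // image in turn, as the log does for etcd, kube-scheduler, and friends.
    func loadCachedImages(paths []string) error {
        for _, p := range paths {
            fmt.Println("Loading image:", p)
            out, err := exec.Command("sudo", "podman", "load", "-i", p).CombinedOutput()
            if err != nil {
                return fmt.Errorf("podman load %s: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        _ = loadCachedImages([]string{
            "/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2",
            "/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
        })
    }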
	I0115 10:39:05.540836   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:07.032884   47063 node_ready.go:49] node "default-k8s-diff-port-709012" has status "Ready":"True"
	I0115 10:39:07.032914   47063 node_ready.go:38] duration metric: took 7.504464113s waiting for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:39:07.032928   47063 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:07.042672   47063 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048131   47063 pod_ready.go:92] pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.048156   47063 pod_ready.go:81] duration metric: took 5.456337ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048167   47063 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053470   47063 pod_ready.go:92] pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.053492   47063 pod_ready.go:81] duration metric: took 5.316882ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053504   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.061828   47063 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.562201   47063 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.562235   47063 pod_ready.go:81] duration metric: took 2.508719163s waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.562248   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571588   47063 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.571614   47063 pod_ready.go:81] duration metric: took 9.356396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571628   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580269   47063 pod_ready.go:92] pod "kube-proxy-d8lcq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.580291   47063 pod_ready.go:81] duration metric: took 8.654269ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580305   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833621   47063 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.833646   47063 pod_ready.go:81] duration metric: took 253.332081ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833658   47063 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
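(Aside on the pod_ready lines interleaved through this section: each process is polling a pod until its Ready condition turns True, logging "Ready":"False" on every check. The sketch below is a rough client-go equivalent, not minikube's pod_ready.go implementation; the kubeconfig path, namespace, pod name, and poll interval are placeholders.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the
    // timeout passes, roughly what the pod_ready.go lines above record.
    func waitPodReady(client *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(client, "kube-system", "metrics-server-57f55c9bc5-qpb25", 6*time.Minute))
    }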
	I0115 10:39:07.776707   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.777515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.687740   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.187565   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.092236   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.976986955s)
	I0115 10:39:11.092266   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0115 10:39:11.092290   46388 cache_images.go:123] Successfully loaded all cached images
	I0115 10:39:11.092296   46388 cache_images.go:92] LoadImages completed in 18.018443053s
	I0115 10:39:11.092373   46388 ssh_runner.go:195] Run: crio config
	I0115 10:39:11.155014   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:11.155036   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:11.155056   46388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:39:11.155074   46388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-824502 NodeName:no-preload-824502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:39:11.155203   46388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-824502"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:39:11.155265   46388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-824502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
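(Aside on the generated configuration above: the kubeadm config, the kubelet systemd drop-in, and the kubelet options are all rendered from minikube's option structs before being copied to the node. The fragment below is a cut-down, purely illustrative text/template rendering of just the InitConfiguration header; it is not minikube's actual template, and the field values are taken from the log.)

    package main

    import (
        "os"
        "text/template"
    )

    // initCfg is a hypothetical, trimmed-down template for the
    // InitConfiguration block shown in the log above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    `

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        _ = t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            BindPort         int
            NodeName         string
        }{"192.168.50.136", 8443, "no-preload-824502"})
    }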
	I0115 10:39:11.155316   46388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:39:11.165512   46388 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:39:11.165586   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:39:11.175288   46388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0115 10:39:11.192730   46388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:39:11.209483   46388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0115 10:39:11.228296   46388 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0115 10:39:11.232471   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:39:11.245041   46388 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502 for IP: 192.168.50.136
	I0115 10:39:11.245106   46388 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:11.245298   46388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:39:11.245364   46388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:39:11.245456   46388 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.key
	I0115 10:39:11.245551   46388 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key.cb5546de
	I0115 10:39:11.245617   46388 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key
	I0115 10:39:11.245769   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:39:11.245808   46388 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:39:11.245823   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:39:11.245855   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:39:11.245895   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:39:11.245937   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:39:11.246018   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:39:11.246987   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:39:11.272058   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:39:11.295425   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:39:11.320271   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:39:11.347161   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:39:11.372529   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:39:11.396765   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:39:11.419507   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:39:11.441814   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:39:11.463306   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:39:11.485830   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:39:11.510306   46388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:39:11.527095   46388 ssh_runner.go:195] Run: openssl version
	I0115 10:39:11.532483   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:39:11.543447   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548266   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548330   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.554228   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:39:11.564891   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:39:11.574809   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579217   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579257   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.584745   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:39:11.596117   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:39:11.606888   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611567   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611632   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.617307   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:39:11.627893   46388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:39:11.632530   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:39:11.638562   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:39:11.644605   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:39:11.650917   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:39:11.656970   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:39:11.662948   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
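(Aside on the certificate checks above: each "openssl x509 -noout -in <cert> -checkend 86400" run asks whether that certificate expires within the next 24 hours. The sketch below answers the same question in pure Go as an illustration; the certificate path is copied from the log and the helper name is invented.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the same check "openssl x509 -checkend 86400" performs in the log.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }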
	I0115 10:39:11.669010   46388 kubeadm.go:404] StartCluster: {Name:no-preload-824502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:39:11.669093   46388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:39:11.669144   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:11.707521   46388 cri.go:89] found id: ""
	I0115 10:39:11.707594   46388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:39:11.719407   46388 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:39:11.719445   46388 kubeadm.go:636] restartCluster start
	I0115 10:39:11.719511   46388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:39:11.729609   46388 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.730839   46388 kubeconfig.go:92] found "no-preload-824502" server: "https://192.168.50.136:8443"
	I0115 10:39:11.733782   46388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:39:11.744363   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:11.744437   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:11.757697   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.245289   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.245389   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.258680   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.745234   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.745334   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.757934   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.244459   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.244549   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.256860   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.745400   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.745486   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.759185   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:14.244696   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.244774   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.257692   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.842044   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.339850   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.779637   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.278260   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.187668   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.187834   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.745104   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.745191   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.757723   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.244680   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.244760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.259042   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.744599   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.744692   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.761497   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.245412   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.245507   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.260040   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.744664   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.744752   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.757209   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.244739   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.244826   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.257922   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.744411   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.744528   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.756304   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.244475   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.244580   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.257372   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.744977   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.745072   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.758201   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:19.244832   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.244906   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.257468   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.342438   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.845282   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.776399   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.276057   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:20.686392   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:22.687613   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.745014   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.745076   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.757274   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.245246   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.245307   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.257735   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.745333   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.745422   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.757945   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.245022   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.245112   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.257351   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.744980   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.745057   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.756073   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.756099   46388 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
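(Aside on the repeated "Checking apiserver status" lines that just concluded: the runner keeps invoking "sudo pgrep -xnf kube-apiserver.*minikube.*" on a roughly half-second cadence until it returns a PID or an overall deadline expires; here the deadline passed, so the cluster is reconfigured. A hedged Go sketch of that poll loop follows; the interval and timeout values are assumptions.)

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID polls pgrep for a running kube-apiserver until it
    // finds one or the deadline passes, mirroring the loop recorded above.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerPID(10 * time.Second)
        fmt.Println(pid, err)
    }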
	I0115 10:39:21.756107   46388 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:39:21.756116   46388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:39:21.756167   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:21.800172   46388 cri.go:89] found id: ""
	I0115 10:39:21.800229   46388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:39:21.815607   46388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:39:21.826460   46388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:39:21.826525   46388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835735   46388 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835758   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:21.963603   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.673572   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.882139   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.975846   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:23.061284   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:39:23.061391   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:23.561760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.061736   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.562127   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
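(Aside on the restart sequence above: with no usable kubeconfig files on disk, the runner replays the kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, then waits for the apiserver process to appear. The sketch below runs the same phase sequence; it assumes kubeadm is on PATH, whereas the log invokes the versioned binary under /var/lib/minikube/binaries.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the kubeadm phase sequence recorded in the log.
    // Purely illustrative; error handling and env setup are simplified.
    func runInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("sudo", args...).CombinedOutput()
            if err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() { fmt.Println(runInitPhases()) }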
	I0115 10:39:21.340520   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.340897   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:21.776123   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.776196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.777003   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:24.688163   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.187371   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.061818   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.561582   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.584837   46388 api_server.go:72] duration metric: took 2.523550669s to wait for apiserver process to appear ...
	I0115 10:39:25.584868   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:39:25.584893   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.585385   46388 api_server.go:269] stopped: https://192.168.50.136:8443/healthz: Get "https://192.168.50.136:8443/healthz": dial tcp 192.168.50.136:8443: connect: connection refused
	I0115 10:39:26.085248   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.546970   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.547007   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.547026   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.597433   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.597466   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.597482   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.342652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.343320   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.840652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.625537   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:29.625587   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
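(Aside on the healthz output above: once the apiserver process exists, readiness is judged by polling https://<node>:8443/healthz and tolerating the intermediate responses, the 403 returned to the anonymous user and the 500 "healthz check failed" bodies while post-start hooks finish, until a 200 arrives. A minimal Go sketch of that polling loop follows; TLS verification is skipped here purely for brevity, and the URL and timings are placeholders.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout passes, echoing intermediate 403/500 bodies as the log does.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.50.136:8443/healthz", 2*time.Minute))
    }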
	I0115 10:39:30.085614   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.091715   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.091745   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.585298   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.591889   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.591919   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:31.086028   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:31.091297   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:39:31.099702   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:39:31.099726   46388 api_server.go:131] duration metric: took 5.514851771s to wait for apiserver health ...
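	[editor's note] The long run of 500 responses above is minikube polling the apiserver's /healthz endpoint until every post-start hook reports ok; each retry prints the per-hook [+]/[-] breakdown, and the loop ends once the endpoint returns 200. A minimal, illustrative Go sketch of that kind of poll loop follows; the interval, timeout, and the use of InsecureSkipVerify are assumptions made for the example and are not taken from minikube's api_server.go.

	// Illustrative sketch only: poll an HTTPS /healthz endpoint until it returns
	// 200 or a timeout expires, mirroring the retry pattern visible in the log
	// above. Interval, timeout, and TLS handling are assumptions for the example,
	// not minikube's actual implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The real client authenticates with cluster certificates; skipping
				// verification here only keeps the sketch self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: all post-start hooks are ok
				}
				// A 500 means one or more post-start hooks have not finished yet.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to become healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.136:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}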
	I0115 10:39:31.099735   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:31.099741   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:31.102193   46388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:39:28.275539   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:30.276634   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.104002   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:39:31.130562   46388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:39:31.163222   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:39:31.186170   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:39:31.186201   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:39:31.186212   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:39:31.186222   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:39:31.186231   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:39:31.186242   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:39:31.186252   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:39:31.186263   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:39:31.186276   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:39:31.186284   46388 system_pods.go:74] duration metric: took 23.040188ms to wait for pod list to return data ...
	I0115 10:39:31.186292   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:39:31.215529   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:39:31.215567   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:39:31.215584   46388 node_conditions.go:105] duration metric: took 29.283674ms to run NodePressure ...
	I0115 10:39:31.215615   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:31.584238   46388 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590655   46388 kubeadm.go:787] kubelet initialised
	I0115 10:39:31.590679   46388 kubeadm.go:788] duration metric: took 6.418412ms waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590688   46388 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:31.603892   46388 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.612449   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612484   46388 pod_ready.go:81] duration metric: took 8.567896ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.612497   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612507   46388 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.622651   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622678   46388 pod_ready.go:81] duration metric: took 10.161967ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.622690   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622698   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.633893   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633917   46388 pod_ready.go:81] duration metric: took 11.210807ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.633929   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633937   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.639395   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639423   46388 pod_ready.go:81] duration metric: took 5.474128ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.639434   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639442   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.989202   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989242   46388 pod_ready.go:81] duration metric: took 349.786667ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.989255   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989264   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.387200   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387227   46388 pod_ready.go:81] duration metric: took 397.955919ms waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.387236   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387243   46388 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.789213   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789235   46388 pod_ready.go:81] duration metric: took 401.985079ms waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.789245   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789252   46388 pod_ready.go:38] duration metric: took 1.198551697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
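	[editor's note] The pod_ready lines that dominate the rest of this log all report the same signal: each system pod is fetched and its Ready condition is inspected, and the wait repeats until the condition is True or the deadline passes. A hedged client-go sketch of such a check is below; the helper name is an assumption for illustration, while the kubeconfig path and pod name are taken from the log above.

	// Illustrative sketch only: check whether a pod has the Ready condition set
	// to True, the same signal the pod_ready log lines report. The helper name is
	// an assumption made for this example; it is not minikube's pod_ready.go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-4821/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := podIsReady(clientset, "kube-system", "coredns-76f75df574-ft2wt")
		fmt.Println(ready, err)
	}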
	I0115 10:39:32.789271   46388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:39:32.802883   46388 ops.go:34] apiserver oom_adj: -16
	I0115 10:39:32.802901   46388 kubeadm.go:640] restartCluster took 21.083448836s
	I0115 10:39:32.802908   46388 kubeadm.go:406] StartCluster complete in 21.133905255s
	I0115 10:39:32.802921   46388 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.802997   46388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:39:32.804628   46388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.804880   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:39:32.804990   46388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:39:32.805075   46388 addons.go:69] Setting storage-provisioner=true in profile "no-preload-824502"
	I0115 10:39:32.805094   46388 addons.go:234] Setting addon storage-provisioner=true in "no-preload-824502"
	W0115 10:39:32.805102   46388 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:39:32.805108   46388 addons.go:69] Setting default-storageclass=true in profile "no-preload-824502"
	I0115 10:39:32.805128   46388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-824502"
	I0115 10:39:32.805128   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:39:32.805137   46388 addons.go:69] Setting metrics-server=true in profile "no-preload-824502"
	I0115 10:39:32.805165   46388 addons.go:234] Setting addon metrics-server=true in "no-preload-824502"
	I0115 10:39:32.805172   46388 host.go:66] Checking if "no-preload-824502" exists ...
	W0115 10:39:32.805175   46388 addons.go:243] addon metrics-server should already be in state true
	I0115 10:39:32.805219   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.805564   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805565   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805597   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805602   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805616   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805698   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.809596   46388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-824502" context rescaled to 1 replicas
	I0115 10:39:32.809632   46388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:39:32.812135   46388 out.go:177] * Verifying Kubernetes components...
	I0115 10:39:32.814392   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:39:32.823244   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0115 10:39:32.823758   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.823864   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0115 10:39:32.824287   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824306   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.824351   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.824693   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.824816   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.824833   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824857   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.825184   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.825778   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.825823   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.827847   46388 addons.go:234] Setting addon default-storageclass=true in "no-preload-824502"
	W0115 10:39:32.827864   46388 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:39:32.827883   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.828242   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.828286   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.838537   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0115 10:39:32.839162   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.839727   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.839747   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.841293   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.841862   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.841899   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.844309   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0115 10:39:32.844407   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0115 10:39:32.844654   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.844941   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.845132   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845156   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.845712   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.845881   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845894   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.846316   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.846347   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.846910   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.847189   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.849126   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.851699   46388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:39:32.853268   46388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:32.853284   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:39:32.853305   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.855997   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856372   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.856394   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856569   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.856716   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.856853   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.856975   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.861396   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0115 10:39:32.861893   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.862379   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.862409   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.862874   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.863050   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.864195   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0115 10:39:32.864480   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.866714   46388 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:39:32.864849   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.868242   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:39:32.868256   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:39:32.868274   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.868596   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.868613   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.869057   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.869306   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.870918   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.871163   46388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:32.871177   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:39:32.871192   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.871252   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871670   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.871691   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871958   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.872127   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.872288   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.872463   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.874381   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875287   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.875314   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875478   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.875624   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.875786   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.875893   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.982357   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:33.059016   46388 node_ready.go:35] waiting up to 6m0s for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:33.059259   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:39:33.059281   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:39:33.060796   46388 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:39:33.060983   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:33.110608   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:39:33.110633   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:39:33.154857   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:33.154886   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:39:33.198495   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:34.178167   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117123302s)
	I0115 10:39:34.178220   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178234   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178312   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19592253s)
	I0115 10:39:34.178359   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178372   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178649   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178669   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178687   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178712   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178723   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178735   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178691   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178800   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178811   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178823   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178982   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179001   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.179003   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179040   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179057   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179075   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.186855   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.186875   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.187114   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.187135   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.187154   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.293778   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095231157s)
	I0115 10:39:34.293837   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.293861   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294161   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294184   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294194   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.294203   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294451   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294475   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294487   46388 addons.go:470] Verifying addon metrics-server=true in "no-preload-824502"
	I0115 10:39:34.296653   46388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:39:29.687541   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.689881   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.692248   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.298179   46388 addons.go:505] enable addons completed in 1.493195038s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:31.842086   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.843601   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:32.775651   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.778997   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:36.186700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.688932   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:35.063999   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:37.068802   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:39.564287   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:36.341901   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.344615   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:37.278252   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:39.780035   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:41.186854   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.687410   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:40.063481   46388 node_ready.go:49] node "no-preload-824502" has status "Ready":"True"
	I0115 10:39:40.063509   46388 node_ready.go:38] duration metric: took 7.00445832s waiting for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:40.063521   46388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:40.069733   46388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077511   46388 pod_ready.go:92] pod "coredns-76f75df574-ft2wt" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.077539   46388 pod_ready.go:81] duration metric: took 7.783253ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077549   46388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082665   46388 pod_ready.go:92] pod "etcd-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.082693   46388 pod_ready.go:81] duration metric: took 5.137636ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082704   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087534   46388 pod_ready.go:92] pod "kube-apiserver-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.087552   46388 pod_ready.go:81] duration metric: took 4.840583ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087563   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092447   46388 pod_ready.go:92] pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.092473   46388 pod_ready.go:81] duration metric: took 4.90114ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092493   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464047   46388 pod_ready.go:92] pod "kube-proxy-nlk2h" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.464065   46388 pod_ready.go:81] duration metric: took 371.565815ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464075   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:42.472255   46388 pod_ready.go:102] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.471011   46388 pod_ready.go:92] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:43.471033   46388 pod_ready.go:81] duration metric: took 3.006951578s waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:43.471045   46388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.841668   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.842151   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.277636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:44.787510   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:46.187891   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:48.687578   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.478255   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.978120   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.340455   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.341486   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.840829   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.275430   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.776946   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.188236   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.686748   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.980682   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:52.479488   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.840971   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.841513   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.778023   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.275602   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:55.687892   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.186665   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.978059   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.978213   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.978881   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.341772   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.841021   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.775700   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:59.274671   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:01.280895   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.186976   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:02.688712   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.978942   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.482480   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.841912   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.340823   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.775015   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.776664   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.185744   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.185877   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:09.187192   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.979141   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.479235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.840997   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.842100   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.278110   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.775278   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:11.686672   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.187037   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.978475   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.978621   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.346343   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.841357   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.841981   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:13.278313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:15.777340   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.188343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.687840   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.979177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.981550   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.478364   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:17.340973   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.341317   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.275525   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:20.277493   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.187342   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.693743   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.480386   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.481947   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.341650   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.841949   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:22.777674   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.273859   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:26.186846   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.188206   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.978266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.979824   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.842629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.341954   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.274109   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:29.275517   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:31.277396   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.688520   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.187343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.478712   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:32.978549   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.843559   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.340435   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.278639   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.777051   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.186611   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:34.978720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:37.488790   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.841994   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.340074   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.278319   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.776206   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:39.978911   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.478331   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.187741   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.687320   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.340766   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.341909   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.843116   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.777726   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.777953   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:45.188685   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.687270   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.978841   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.477932   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.478482   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.340237   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.341936   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.275247   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.777753   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.688548   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.187385   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.188261   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.478562   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.978677   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.840537   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.842188   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.278594   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.774847   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.687614   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:59.186203   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.479325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.979266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.340295   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.342857   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.776968   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.777421   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.278730   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.186645   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.187583   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.478127   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.478816   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:00.841474   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.340255   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.775648   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.779261   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.687557   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.688081   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.979671   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.478240   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.345230   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.841561   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:09.841629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.275641   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.276466   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.187771   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.688852   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.478832   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.978808   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:11.841717   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.341355   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.775133   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.274677   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.186001   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.186387   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.186931   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.979099   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.478539   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:16.841294   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:18.842244   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.776623   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:20.274196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.187095   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.689700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.978471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.478169   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.479319   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.341851   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.343663   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.275134   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.276420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.185307   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.186549   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.978977   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.979239   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:25.840539   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:27.840819   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:29.842580   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.775069   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.775244   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.275239   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:30.187482   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.687454   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.478330   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.479265   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.340974   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.342201   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.275561   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.775652   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.687487   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.689628   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:39.186244   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.979235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.981609   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.342452   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:38.841213   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.775893   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.274573   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.186313   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.687042   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.478993   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.479953   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.341359   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.840325   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.775636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.275821   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.687911   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.186598   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:44.977946   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:46.980471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.477591   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.841849   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.341443   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:47.276441   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.775182   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.687273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.187451   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.480325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.979440   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.841657   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.341257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.776199   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:54.274920   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.188121   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.191970   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.478903   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:58.979288   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.341479   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.841144   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.841215   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.775625   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.276127   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.687860   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:02.188506   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.480582   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:03.977715   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.841608   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.340546   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.775220   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.274050   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.277327   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.688269   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.187187   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:05.977760   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.978356   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.340629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.341333   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.775130   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.776410   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.686836   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.187035   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.187814   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.978478   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.477854   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.477883   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.341625   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.841300   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.842745   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:13.276029   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:15.774949   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.686998   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.689531   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.478177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.978154   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.844053   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:19.339915   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:17.775988   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:20.276213   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.187144   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.188273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.479275   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.977720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.342019   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.343747   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:22.775222   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.274922   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.186701   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.979093   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.478022   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.843596   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.340257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:27.275420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:29.275918   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:31.276702   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.186796   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.686406   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.478933   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.978757   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.341780   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.842117   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:33.774432   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.775822   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:34.687304   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:36.687850   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.187956   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.478261   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.978198   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.341314   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.840626   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.842475   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:38.275042   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:40.774892   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.686479   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.688800   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.980119   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:42.478070   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.478709   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.844661   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.340617   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.278574   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:45.775324   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.185760   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.186399   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.479381   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.979086   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.842369   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:49.341153   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:47.776338   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.275329   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.187219   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.687370   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.479573   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.978568   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.840818   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.842279   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.776812   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:54.780747   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.187111   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:57.187263   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.478479   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.977687   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.846775   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.340913   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.768584   46584 pod_ready.go:81] duration metric: took 4m0.001000825s waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:42:56.768615   46584 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:42:56.768623   46584 pod_ready.go:38] duration metric: took 4m9.613401399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:42:56.768641   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:42:56.768686   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:42:56.768739   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:42:56.842276   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:56.842298   46584 cri.go:89] found id: ""
	I0115 10:42:56.842309   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:42:56.842361   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.847060   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:42:56.847118   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:42:56.887059   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:56.887092   46584 cri.go:89] found id: ""
	I0115 10:42:56.887100   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:42:56.887158   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.893238   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:42:56.893289   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:42:56.933564   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:56.933593   46584 cri.go:89] found id: ""
	I0115 10:42:56.933603   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:42:56.933657   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.937882   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:42:56.937958   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:42:56.980953   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:56.980989   46584 cri.go:89] found id: ""
	I0115 10:42:56.980999   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:42:56.981051   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.985008   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:42:56.985058   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:42:57.026275   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:57.026305   46584 cri.go:89] found id: ""
	I0115 10:42:57.026315   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:42:57.026373   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.030799   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:42:57.030885   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:42:57.071391   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:42:57.071416   46584 cri.go:89] found id: ""
	I0115 10:42:57.071424   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:42:57.071485   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.076203   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:42:57.076254   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:42:57.119035   46584 cri.go:89] found id: ""
	I0115 10:42:57.119062   46584 logs.go:284] 0 containers: []
	W0115 10:42:57.119069   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:42:57.119074   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:42:57.119129   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:42:57.167335   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:57.167355   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:57.167360   46584 cri.go:89] found id: ""
	I0115 10:42:57.167367   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:42:57.167411   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.171919   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.176255   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:42:57.176284   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:42:57.328501   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:42:57.328538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:57.390279   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:42:57.390309   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:57.886607   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:42:57.886645   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:42:57.937391   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:42:57.937420   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:42:58.001313   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:42:58.001348   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:42:58.016772   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:42:58.016804   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:58.060489   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:42:58.060516   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:58.102993   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:42:58.103043   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:58.140732   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:42:58.140764   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:58.191891   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:42:58.191927   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:58.235836   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:42:58.235861   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:58.277424   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:42:58.277465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:00.844771   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:00.862922   46584 api_server.go:72] duration metric: took 4m17.850865s to wait for apiserver process to appear ...
	I0115 10:43:00.862946   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:00.862992   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:00.863055   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:00.909986   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:00.910013   46584 cri.go:89] found id: ""
	I0115 10:43:00.910020   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:00.910066   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.915553   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:00.915634   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:00.969923   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:00.969951   46584 cri.go:89] found id: ""
	I0115 10:43:00.969961   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:00.970021   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.974739   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:00.974805   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:01.024283   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.024305   46584 cri.go:89] found id: ""
	I0115 10:43:01.024314   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:01.024366   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.029325   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:01.029388   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:01.070719   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.070746   46584 cri.go:89] found id: ""
	I0115 10:43:01.070755   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:01.070806   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.074906   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:01.074969   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:01.111715   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.111747   46584 cri.go:89] found id: ""
	I0115 10:43:01.111756   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:01.111805   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.116173   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:01.116225   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:01.157760   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.157791   46584 cri.go:89] found id: ""
	I0115 10:43:01.157802   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:01.157866   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.161944   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:01.162010   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:01.201888   46584 cri.go:89] found id: ""
	I0115 10:43:01.201915   46584 logs.go:284] 0 containers: []
	W0115 10:43:01.201925   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:01.201932   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:01.201990   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:01.244319   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.244346   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.244352   46584 cri.go:89] found id: ""
	I0115 10:43:01.244361   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:01.244454   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.248831   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.253617   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:01.253643   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:01.309426   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:01.309465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.346755   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:01.346789   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.385238   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:01.385266   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.423907   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:01.423941   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.480867   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:01.480902   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:01.538367   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:01.538403   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.580240   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:01.580273   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.622561   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:01.622602   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:01.675436   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:01.675463   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:59.687714   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.186463   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.982902   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:03.478178   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.840619   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.841154   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:04.842905   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.080545   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:02.080578   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:02.144713   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:02.144756   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:02.160120   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:02.160147   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:04.776113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:43:04.782741   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:43:04.783959   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:04.783979   46584 api_server.go:131] duration metric: took 3.92102734s to wait for apiserver health ...
	I0115 10:43:04.783986   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:04.784019   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:04.784071   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:04.832660   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:04.832685   46584 cri.go:89] found id: ""
	I0115 10:43:04.832695   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:04.832750   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.836959   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:04.837009   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:04.878083   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:04.878103   46584 cri.go:89] found id: ""
	I0115 10:43:04.878110   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:04.878160   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.882581   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:04.882642   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:04.927778   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:04.927798   46584 cri.go:89] found id: ""
	I0115 10:43:04.927805   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:04.927848   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.932822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:04.932891   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:04.975930   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:04.975955   46584 cri.go:89] found id: ""
	I0115 10:43:04.975965   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:04.976010   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.980744   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:04.980803   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:05.024300   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.024325   46584 cri.go:89] found id: ""
	I0115 10:43:05.024332   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:05.024383   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.029091   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:05.029159   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:05.081239   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.081264   46584 cri.go:89] found id: ""
	I0115 10:43:05.081273   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:05.081332   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.085822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:05.085879   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:05.126839   46584 cri.go:89] found id: ""
	I0115 10:43:05.126884   46584 logs.go:284] 0 containers: []
	W0115 10:43:05.126896   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:05.126903   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:05.126963   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:05.168241   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.168269   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.168276   46584 cri.go:89] found id: ""
	I0115 10:43:05.168285   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:05.168343   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.173309   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.177144   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:05.177164   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:05.239116   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:05.239148   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:05.368712   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:05.368745   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:05.429504   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:05.429540   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:05.473181   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:05.473216   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.510948   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:05.510974   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.551052   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:05.551082   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.606711   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:05.606746   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:05.661634   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:05.661663   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:05.675627   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:05.675656   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:05.736266   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:05.736305   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.775567   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:05.775597   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:06.111495   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:06.111531   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:08.661238   46584 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:08.661275   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.661282   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.661288   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.661294   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.661300   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.661306   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.661316   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.661324   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.661335   46584 system_pods.go:74] duration metric: took 3.877343546s to wait for pod list to return data ...
	I0115 10:43:08.661342   46584 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:08.664367   46584 default_sa.go:45] found service account: "default"
	I0115 10:43:08.664393   46584 default_sa.go:55] duration metric: took 3.04125ms for default service account to be created ...
	I0115 10:43:08.664408   46584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:08.672827   46584 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:08.672852   46584 system_pods.go:89] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.672860   46584 system_pods.go:89] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.672867   46584 system_pods.go:89] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.672873   46584 system_pods.go:89] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.672879   46584 system_pods.go:89] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.672885   46584 system_pods.go:89] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.672895   46584 system_pods.go:89] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.672906   46584 system_pods.go:89] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.672920   46584 system_pods.go:126] duration metric: took 8.505614ms to wait for k8s-apps to be running ...
	I0115 10:43:08.672933   46584 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:08.672984   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:08.690592   46584 system_svc.go:56] duration metric: took 17.651896ms WaitForService to wait for kubelet.
	I0115 10:43:08.690618   46584 kubeadm.go:581] duration metric: took 4m25.678563679s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:08.690640   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:08.694652   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:08.694679   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:08.694692   46584 node_conditions.go:105] duration metric: took 4.045505ms to run NodePressure ...
	I0115 10:43:08.694705   46584 start.go:228] waiting for startup goroutines ...
	I0115 10:43:08.694713   46584 start.go:233] waiting for cluster config update ...
	I0115 10:43:08.694725   46584 start.go:242] writing updated cluster config ...
	I0115 10:43:08.694991   46584 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:08.747501   46584 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:08.750319   46584 out.go:177] * Done! kubectl is now configured to use "embed-certs-781270" cluster and "default" namespace by default
	I0115 10:43:04.686284   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:06.703127   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.180590   46387 pod_ready.go:81] duration metric: took 4m0.000776944s waiting for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:07.180624   46387 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0115 10:43:07.180644   46387 pod_ready.go:38] duration metric: took 4m1.198895448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:07.180669   46387 kubeadm.go:640] restartCluster took 5m11.875261334s
	W0115 10:43:07.180729   46387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0115 10:43:07.180765   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0115 10:43:05.479764   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.978536   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.343529   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841510   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841533   47063 pod_ready.go:81] duration metric: took 4m0.007868879s waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:09.841542   47063 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:09.841549   47063 pod_ready.go:38] duration metric: took 4m2.808610487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:09.841562   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:09.841584   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:09.841625   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:12.165729   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.984931075s)
	I0115 10:43:12.165790   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:12.178710   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:43:12.188911   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:43:12.199329   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:43:12.199377   46387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 10:43:12.411245   46387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 10:43:09.980448   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:12.478625   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:14.479234   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.904898   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:09.904921   47063 cri.go:89] found id: ""
	I0115 10:43:09.904930   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:09.904996   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.911493   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:09.911557   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:09.958040   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:09.958060   47063 cri.go:89] found id: ""
	I0115 10:43:09.958070   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:09.958122   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.962914   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:09.962972   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:10.033848   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:10.033875   47063 cri.go:89] found id: ""
	I0115 10:43:10.033885   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:10.033946   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.043173   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:10.043232   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:10.088380   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:10.088405   47063 cri.go:89] found id: ""
	I0115 10:43:10.088415   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:10.088478   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.094288   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:10.094350   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:10.145428   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:10.145453   47063 cri.go:89] found id: ""
	I0115 10:43:10.145463   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:10.145547   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.150557   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:10.150637   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:10.206875   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:10.206901   47063 cri.go:89] found id: ""
	I0115 10:43:10.206915   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:10.206971   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.211979   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:10.212039   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:10.260892   47063 cri.go:89] found id: ""
	I0115 10:43:10.260914   47063 logs.go:284] 0 containers: []
	W0115 10:43:10.260924   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:10.260936   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:10.260987   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:10.315938   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.315970   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:10.315978   47063 cri.go:89] found id: ""
	I0115 10:43:10.315987   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:10.316045   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.324077   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.332727   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:10.332756   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.376006   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:10.376034   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:10.967301   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:10.967337   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:11.033301   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:11.033327   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:11.091151   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:11.091184   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:11.145411   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:11.145447   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:11.194249   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:11.194274   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:11.373988   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:11.374020   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:11.442754   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:11.442788   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:11.486282   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:11.486315   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:11.547428   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:11.547464   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:11.560977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:11.561005   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:11.603150   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:11.603179   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.149324   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:14.166360   47063 api_server.go:72] duration metric: took 4m14.983478755s to wait for apiserver process to appear ...
	I0115 10:43:14.166391   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:14.166444   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:14.166504   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:14.211924   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:14.211950   47063 cri.go:89] found id: ""
	I0115 10:43:14.211961   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:14.212018   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.216288   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:14.216352   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:14.264237   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:14.264270   47063 cri.go:89] found id: ""
	I0115 10:43:14.264280   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:14.264338   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.268883   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:14.268947   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:14.329606   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:14.329631   47063 cri.go:89] found id: ""
	I0115 10:43:14.329639   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:14.329694   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.334069   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:14.334133   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:14.374753   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.374779   47063 cri.go:89] found id: ""
	I0115 10:43:14.374788   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:14.374842   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.380452   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:14.380529   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:14.422341   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:14.422371   47063 cri.go:89] found id: ""
	I0115 10:43:14.422380   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:14.422444   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.427106   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:14.427169   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:14.469410   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:14.469440   47063 cri.go:89] found id: ""
	I0115 10:43:14.469450   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:14.469511   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.475098   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:14.475216   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:14.533771   47063 cri.go:89] found id: ""
	I0115 10:43:14.533794   47063 logs.go:284] 0 containers: []
	W0115 10:43:14.533800   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:14.533805   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:14.533876   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:14.573458   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:14.573483   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:14.573490   47063 cri.go:89] found id: ""
	I0115 10:43:14.573498   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:14.573561   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.578186   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.583133   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:14.583157   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.631142   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:14.631180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:16.978406   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:18.979879   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:15.076904   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:15.076958   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:15.129739   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:15.129778   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:15.169656   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:15.169685   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:15.229569   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:15.229616   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:15.293037   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:15.293075   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:15.351198   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:15.351243   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:15.394604   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:15.394642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:15.451142   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:15.451180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:15.466108   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:15.466146   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:15.595576   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:15.595615   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:15.643711   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:15.643740   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.200861   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:43:18.207576   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:43:18.208943   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:18.208964   47063 api_server.go:131] duration metric: took 4.042566476s to wait for apiserver health ...
	I0115 10:43:18.208971   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:18.208992   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:18.209037   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:18.254324   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.254353   47063 cri.go:89] found id: ""
	I0115 10:43:18.254361   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:18.254405   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.258765   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:18.258844   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:18.303785   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.303811   47063 cri.go:89] found id: ""
	I0115 10:43:18.303820   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:18.303880   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.308940   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:18.309009   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:18.358850   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:18.358878   47063 cri.go:89] found id: ""
	I0115 10:43:18.358888   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:18.358954   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.363588   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:18.363656   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:18.412797   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.412820   47063 cri.go:89] found id: ""
	I0115 10:43:18.412828   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:18.412878   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.418704   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:18.418765   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:18.460050   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:18.460074   47063 cri.go:89] found id: ""
	I0115 10:43:18.460083   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:18.460138   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.465581   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:18.465642   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:18.516632   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.516656   47063 cri.go:89] found id: ""
	I0115 10:43:18.516665   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:18.516719   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.521873   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:18.521935   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:18.574117   47063 cri.go:89] found id: ""
	I0115 10:43:18.574145   47063 logs.go:284] 0 containers: []
	W0115 10:43:18.574154   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:18.574161   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:18.574222   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:18.630561   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.630593   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:18.630599   47063 cri.go:89] found id: ""
	I0115 10:43:18.630606   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:18.630666   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.636059   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.640707   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:18.640728   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.681635   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:18.681667   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:18.803880   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:18.803913   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.864605   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:18.864642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.918210   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:18.918250   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.960702   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:18.960733   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:19.013206   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:19.013242   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:19.070193   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:19.070230   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:19.087983   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:19.088023   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:19.150096   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:19.150132   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:19.196977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:19.197006   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:19.244166   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:19.244202   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:19.290314   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:19.290349   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:22.182766   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:22.182794   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.182801   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.182808   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.182814   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.182820   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.182826   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.182836   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.182848   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.182858   47063 system_pods.go:74] duration metric: took 3.973880704s to wait for pod list to return data ...
	I0115 10:43:22.182869   47063 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:22.186304   47063 default_sa.go:45] found service account: "default"
	I0115 10:43:22.186344   47063 default_sa.go:55] duration metric: took 3.464907ms for default service account to be created ...
	I0115 10:43:22.186354   47063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:22.192564   47063 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:22.192595   47063 system_pods.go:89] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.192604   47063 system_pods.go:89] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.192611   47063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.192620   47063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.192627   47063 system_pods.go:89] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.192634   47063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.192644   47063 system_pods.go:89] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.192651   47063 system_pods.go:89] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.192661   47063 system_pods.go:126] duration metric: took 6.301001ms to wait for k8s-apps to be running ...
	I0115 10:43:22.192669   47063 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:22.192720   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:22.210150   47063 system_svc.go:56] duration metric: took 17.476738ms WaitForService to wait for kubelet.
	I0115 10:43:22.210169   47063 kubeadm.go:581] duration metric: took 4m23.02729406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:22.210190   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:22.214086   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:22.214111   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:22.214124   47063 node_conditions.go:105] duration metric: took 3.928309ms to run NodePressure ...
	I0115 10:43:22.214137   47063 start.go:228] waiting for startup goroutines ...
	I0115 10:43:22.214146   47063 start.go:233] waiting for cluster config update ...
	I0115 10:43:22.214158   47063 start.go:242] writing updated cluster config ...
	I0115 10:43:22.214394   47063 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:22.264250   47063 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:22.267546   47063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-709012" cluster and "default" namespace by default
	I0115 10:43:20.980266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:23.478672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.109313   46387 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0115 10:43:26.109392   46387 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:43:26.109501   46387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:43:26.109621   46387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:43:26.109750   46387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:43:26.109926   46387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:43:26.110051   46387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:43:26.110114   46387 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0115 10:43:26.110201   46387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:43:26.112841   46387 out.go:204]   - Generating certificates and keys ...
	I0115 10:43:26.112937   46387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:43:26.113031   46387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:43:26.113142   46387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:43:26.113237   46387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 10:43:26.113336   46387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:43:26.113414   46387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 10:43:26.113530   46387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 10:43:26.113617   46387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:43:26.113717   46387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:43:26.113814   46387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:43:26.113867   46387 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 10:43:26.113959   46387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:43:26.114029   46387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:43:26.114128   46387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:43:26.114214   46387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:43:26.114289   46387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:43:26.114400   46387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:43:26.115987   46387 out.go:204]   - Booting up control plane ...
	I0115 10:43:26.116100   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:43:26.116240   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:43:26.116349   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:43:26.116476   46387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:43:26.116677   46387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:43:26.116792   46387 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.004579 seconds
	I0115 10:43:26.116908   46387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:43:26.117097   46387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:43:26.117187   46387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:43:26.117349   46387 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-206509 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 10:43:26.117437   46387 kubeadm.go:322] [bootstrap-token] Using token: zc1jed.g57dxx99f2u8lwfg
	I0115 10:43:26.118960   46387 out.go:204]   - Configuring RBAC rules ...
	I0115 10:43:26.119074   46387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:43:26.119258   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:43:26.119401   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:43:26.119538   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:43:26.119657   46387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:43:26.119723   46387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:43:26.119796   46387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:43:26.119809   46387 kubeadm.go:322] 
	I0115 10:43:26.119857   46387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:43:26.119863   46387 kubeadm.go:322] 
	I0115 10:43:26.119923   46387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:43:26.119930   46387 kubeadm.go:322] 
	I0115 10:43:26.119950   46387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:43:26.120002   46387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:43:26.120059   46387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:43:26.120078   46387 kubeadm.go:322] 
	I0115 10:43:26.120120   46387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:43:26.120185   46387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:43:26.120249   46387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:43:26.120255   46387 kubeadm.go:322] 
	I0115 10:43:26.120359   46387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0115 10:43:26.120426   46387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:43:26.120433   46387 kubeadm.go:322] 
	I0115 10:43:26.120512   46387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120660   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 10:43:26.120687   46387 kubeadm.go:322]     --control-plane 	  
	I0115 10:43:26.120691   46387 kubeadm.go:322] 
	I0115 10:43:26.120757   46387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:43:26.120763   46387 kubeadm.go:322] 
	I0115 10:43:26.120831   46387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120969   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 10:43:26.120990   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:43:26.121000   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:43:26.122557   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:43:25.977703   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:27.979775   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.123754   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:43:26.133514   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:43:26.152666   46387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:43:26.152776   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.152794   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=old-k8s-version-206509 minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.205859   46387 ops.go:34] apiserver oom_adj: -16
	I0115 10:43:26.398371   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.899064   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.398532   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.898380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.398986   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.899140   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.399224   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.898397   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.399321   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.899035   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.398549   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.898547   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.399096   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.898492   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.399077   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.899311   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:34.398839   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.980789   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:31.981727   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.479518   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.398611   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.898531   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.399422   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.898569   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.399432   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.399017   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.898561   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:39.398551   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.977916   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:38.978672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:39.899402   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.398556   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.898384   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:41.035213   46387 kubeadm.go:1088] duration metric: took 14.882479947s to wait for elevateKubeSystemPrivileges.
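The burst of "kubectl get sa default" calls above is a poll: it retries roughly every 500 ms until the default service account is visible (about 14.9 s here, per the duration metric). A standalone sketch of the same wait loop:

    # retry until the "default" ServiceAccount exists, then stop
    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done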
	I0115 10:43:41.035251   46387 kubeadm.go:406] StartCluster complete in 5m45.791159963s
	I0115 10:43:41.035271   46387 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.035357   46387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:43:41.037947   46387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.038220   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:43:41.038242   46387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:43:41.038314   46387 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038317   46387 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038333   46387 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-206509"
	I0115 10:43:41.038334   46387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-206509"
	W0115 10:43:41.038341   46387 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:43:41.038389   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038388   46387 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038405   46387 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-206509"
	W0115 10:43:41.038428   46387 addons.go:243] addon metrics-server should already be in state true
	I0115 10:43:41.038446   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:43:41.038467   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038724   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038738   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038783   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038787   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038815   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038909   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.054942   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0115 10:43:41.055314   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.055844   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.055868   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.056312   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.056464   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0115 10:43:41.056853   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.056878   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.056910   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.057198   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0115 10:43:41.057317   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057341   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.057532   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.057682   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.057844   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.057955   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057979   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.058300   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.058921   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.058952   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.061947   46387 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-206509"
	W0115 10:43:41.061973   46387 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:43:41.061999   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.062381   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.062405   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.075135   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0115 10:43:41.075593   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.075704   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0115 10:43:41.076514   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.076536   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.076723   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.077196   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.077219   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.077225   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077564   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077607   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.077723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.080161   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.080238   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.082210   46387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:43:41.083883   46387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:43:41.085452   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:43:41.085477   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:43:41.083855   46387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.085496   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.085496   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:43:41.085511   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.086304   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0115 10:43:41.086675   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.087100   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.087120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.087465   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.087970   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.088011   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.090492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.091743   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092335   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092355   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092675   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092695   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092833   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.092969   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.093129   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.093233   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.094042   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.094209   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.094296   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.094372   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.105226   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0115 10:43:41.105644   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.106092   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.106120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.106545   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.106759   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.108735   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.109022   46387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.109040   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:43:41.109057   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.112322   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112771   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.112797   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112914   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.113100   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.113279   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.113442   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.353016   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:43:41.353038   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:43:41.357846   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.365469   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.465358   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:43:41.465379   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:43:41.532584   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:41.532612   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:43:41.598528   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
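The sed pipeline above edits the CoreDNS ConfigMap in place: it prepends a log directive before errors and inserts a hosts block ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host gateway (192.168.61.1 here). One way to inspect the result, with the fragment implied by the sed expressions shown as comments:

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml
    # expected Corefile fragment after the edit:
    #         log
    #         errors
    #         ...
    #         hosts {
    #            192.168.61.1 host.minikube.internal
    #            fallthrough
    #         }
    #         forward . /etc/resolv.conf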
	I0115 10:43:41.605798   46387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-206509" context rescaled to 1 replicas
	I0115 10:43:41.605838   46387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:43:41.607901   46387 out.go:177] * Verifying Kubernetes components...
	I0115 10:43:41.609363   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:41.608778   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:42.634034   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268517129s)
	I0115 10:43:42.634071   46387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.024689682s)
	I0115 10:43:42.634090   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634095   46387 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.634103   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634046   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035489058s)
	I0115 10:43:42.634140   46387 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0115 10:43:42.634200   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.276326924s)
	I0115 10:43:42.634228   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634243   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634451   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634495   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634515   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634525   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634534   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634540   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634557   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634570   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634580   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634589   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634896   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634912   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634967   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634997   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.635008   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.656600   46387 node_ready.go:49] node "old-k8s-version-206509" has status "Ready":"True"
	I0115 10:43:42.656629   46387 node_ready.go:38] duration metric: took 22.522223ms waiting for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.656640   46387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:42.714802   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.714834   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.715273   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.715277   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.715303   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.722261   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:42.792908   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183451396s)
	I0115 10:43:42.792964   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.792982   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793316   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793339   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793352   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.793361   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793580   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793625   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793638   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793649   46387 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-206509"
	I0115 10:43:42.796113   46387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:43:42.798128   46387 addons.go:505] enable addons completed in 1.759885904s: enabled=[storage-provisioner default-storageclass metrics-server]
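Since the test later waits on metrics-server readiness, one way to spot-check the addon at this point would be to list its objects with the bundled kubectl (sketch; the k8s-app=metrics-server selector is the addon's conventional label and is assumed here, not shown in the log):

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get deploy,pods -l k8s-app=metrics-server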
	I0115 10:43:40.979360   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477862   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477895   46388 pod_ready.go:81] duration metric: took 4m0.006840717s waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:43.477906   46388 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:43.477915   46388 pod_ready.go:38] duration metric: took 4m3.414382685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:43.477933   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:43.477963   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:43.478033   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:43.533796   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:43.533825   46388 cri.go:89] found id: ""
	I0115 10:43:43.533836   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:43.533893   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.540165   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:43.540224   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:43.576831   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:43.576853   46388 cri.go:89] found id: ""
	I0115 10:43:43.576861   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:43.576922   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.581556   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:43.581616   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:43.625292   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.625315   46388 cri.go:89] found id: ""
	I0115 10:43:43.625323   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:43.625371   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.630741   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:43.630803   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:43.682511   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:43.682553   46388 cri.go:89] found id: ""
	I0115 10:43:43.682563   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:43.682621   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.688126   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:43.688194   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:43.739847   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.739866   46388 cri.go:89] found id: ""
	I0115 10:43:43.739873   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:43.739919   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.744569   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:43.744635   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:43.787603   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:43.787627   46388 cri.go:89] found id: ""
	I0115 10:43:43.787635   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:43.787676   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.792209   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:43.792271   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:43.838530   46388 cri.go:89] found id: ""
	I0115 10:43:43.838557   46388 logs.go:284] 0 containers: []
	W0115 10:43:43.838568   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:43.838576   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:43.838636   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:43.885727   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:43.885755   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:43.885761   46388 cri.go:89] found id: ""
	I0115 10:43:43.885769   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:43.885822   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.891036   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.895462   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:43.895493   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.939544   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:43.939568   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.985944   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:43.985973   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:44.052893   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:44.052923   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:44.116539   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:44.116569   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:44.173390   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:44.173432   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:44.194269   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:44.194295   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:44.239908   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:44.239935   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:44.729495   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:46.231080   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:46.231100   46387 pod_ready.go:81] duration metric: took 3.50881186s waiting for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:46.231109   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:48.239378   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:44.737413   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:44.737445   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:44.891846   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:44.891875   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:44.951418   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:44.951453   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:45.000171   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:45.000201   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:45.041629   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:45.041657   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
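Each "Gathering logs for ..." step above boils down to the same two commands per component: resolve the container ID with crictl ps, then tail its log. Condensed into a sketch for kube-apiserver:

    # find the kube-apiserver container ID, then dump its last 400 log lines
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo /usr/bin/crictl logs --tail 400 "$id"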
	I0115 10:43:47.586439   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:47.602078   46388 api_server.go:72] duration metric: took 4m14.792413378s to wait for apiserver process to appear ...
	I0115 10:43:47.602102   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:47.602138   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:47.602193   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:47.646259   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:47.646283   46388 cri.go:89] found id: ""
	I0115 10:43:47.646291   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:47.646346   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.650757   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:47.650830   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:47.691688   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.691715   46388 cri.go:89] found id: ""
	I0115 10:43:47.691724   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:47.691777   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.696380   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:47.696467   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:47.738315   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:47.738340   46388 cri.go:89] found id: ""
	I0115 10:43:47.738349   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:47.738402   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.742810   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:47.742870   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:47.783082   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:47.783114   46388 cri.go:89] found id: ""
	I0115 10:43:47.783124   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:47.783178   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.787381   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:47.787432   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:47.832325   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:47.832353   46388 cri.go:89] found id: ""
	I0115 10:43:47.832363   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:47.832420   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.836957   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:47.837014   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:47.877146   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:47.877169   46388 cri.go:89] found id: ""
	I0115 10:43:47.877178   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:47.877231   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.881734   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:47.881782   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:47.921139   46388 cri.go:89] found id: ""
	I0115 10:43:47.921169   46388 logs.go:284] 0 containers: []
	W0115 10:43:47.921180   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:47.921188   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:47.921236   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:47.959829   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:47.959857   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:47.959864   46388 cri.go:89] found id: ""
	I0115 10:43:47.959872   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:47.959924   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.964105   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.968040   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:47.968059   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:48.017234   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:48.017266   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:48.073552   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:48.073583   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:48.512500   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:48.512539   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:48.564545   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:48.564578   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:48.609739   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:48.609768   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:48.654076   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:48.654106   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:48.691287   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:48.691314   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:48.739023   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:48.739063   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:48.791976   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:48.792018   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:48.808633   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:48.808659   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:48.933063   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:48.933099   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:48.974794   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:48.974825   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:49.735197   46387 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735227   46387 pod_ready.go:81] duration metric: took 3.504112323s waiting for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:49.735237   46387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735243   46387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740497   46387 pod_ready.go:92] pod "kube-proxy-lh96p" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:49.740515   46387 pod_ready.go:81] duration metric: took 5.267229ms waiting for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740525   46387 pod_ready.go:38] duration metric: took 7.083874855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:49.740537   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:49.740580   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:49.755697   46387 api_server.go:72] duration metric: took 8.149828702s to wait for apiserver process to appear ...
	I0115 10:43:49.755718   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:49.755731   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:43:49.762148   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:43:49.762995   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:43:49.763013   46387 api_server.go:131] duration metric: took 7.290279ms to wait for apiserver health ...
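The healthz wait above is a plain GET against the apiserver; the same probe can be reproduced by hand (sketch; -k skips verification against the cluster CA):

    curl -k https://192.168.61.70:8443/healthz
    # a healthy apiserver answers HTTP 200 with the body "ok", as in the log above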
	I0115 10:43:49.763019   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:49.766597   46387 system_pods.go:59] 4 kube-system pods found
	I0115 10:43:49.766615   46387 system_pods.go:61] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.766620   46387 system_pods.go:61] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.766626   46387 system_pods.go:61] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.766631   46387 system_pods.go:61] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.766637   46387 system_pods.go:74] duration metric: took 3.613036ms to wait for pod list to return data ...
	I0115 10:43:49.766642   46387 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:49.768826   46387 default_sa.go:45] found service account: "default"
	I0115 10:43:49.768844   46387 default_sa.go:55] duration metric: took 2.197235ms for default service account to be created ...
	I0115 10:43:49.768850   46387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:49.772271   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:49.772296   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.772304   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.772314   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.772321   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.772339   46387 retry.go:31] will retry after 223.439669ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.001140   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.001165   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.001170   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.001176   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.001181   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.001198   46387 retry.go:31] will retry after 329.400473ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.335362   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.335386   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.335391   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.335398   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.335403   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.335420   46387 retry.go:31] will retry after 466.919302ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.806617   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.806643   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.806649   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.806655   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.806660   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.806678   46387 retry.go:31] will retry after 596.303035ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.407231   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:51.407257   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:51.407264   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:51.407271   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:51.407275   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:51.407292   46387 retry.go:31] will retry after 688.903723ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.102330   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.102357   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.102364   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.102374   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.102382   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.102399   46387 retry.go:31] will retry after 817.783297ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.925586   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.925612   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.925620   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.925629   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.925636   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.925658   46387 retry.go:31] will retry after 797.004884ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:53.728788   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:53.728812   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:53.728817   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:53.728823   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:53.728827   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:53.728843   46387 retry.go:31] will retry after 1.021568746s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
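The retries above keep listing etcd, kube-apiserver, kube-controller-manager and kube-scheduler as missing, presumably because the mirror pods for those static control-plane pods have not yet appeared in the API; the wait matches them by the component labels listed earlier. A direct check with the same labels (sketch):

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l component=kube-apiserver
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l component=etcd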
	I0115 10:43:51.528236   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:43:51.533236   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:43:51.534697   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:43:51.534714   46388 api_server.go:131] duration metric: took 3.932606059s to wait for apiserver health ...
	I0115 10:43:51.534721   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:51.534744   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:51.534796   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:51.571704   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.571730   46388 cri.go:89] found id: ""
	I0115 10:43:51.571740   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:51.571793   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.576140   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:51.576201   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:51.614720   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:51.614803   46388 cri.go:89] found id: ""
	I0115 10:43:51.614823   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:51.614909   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.620904   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:51.620966   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:51.659679   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.659711   46388 cri.go:89] found id: ""
	I0115 10:43:51.659721   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:51.659779   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.664223   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:51.664275   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:51.701827   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:51.701850   46388 cri.go:89] found id: ""
	I0115 10:43:51.701858   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:51.701915   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.707296   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:51.707354   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:51.745962   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:51.745989   46388 cri.go:89] found id: ""
	I0115 10:43:51.746006   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:51.746061   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.750872   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:51.750942   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:51.796600   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:51.796637   46388 cri.go:89] found id: ""
	I0115 10:43:51.796647   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:51.796697   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.801250   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:51.801321   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:51.845050   46388 cri.go:89] found id: ""
	I0115 10:43:51.845072   46388 logs.go:284] 0 containers: []
	W0115 10:43:51.845081   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:51.845087   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:51.845144   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:51.880907   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:51.880935   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:51.880942   46388 cri.go:89] found id: ""
	I0115 10:43:51.880951   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:51.880997   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.885202   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.889086   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:51.889108   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.939740   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:51.939770   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.977039   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:51.977068   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:52.024927   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:52.024960   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:52.071850   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:52.071882   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:52.123313   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:52.123343   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:52.137274   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:52.137297   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:52.260488   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:52.260525   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:52.301121   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:52.301156   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:52.346323   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:52.346349   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:52.402759   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:52.402788   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:52.457075   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:52.457103   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:52.811321   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:52.811359   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:55.374293   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:55.374327   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.374335   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.374342   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.374348   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.374354   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.374361   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.374371   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.374382   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.374394   46388 system_pods.go:74] duration metric: took 3.83966542s to wait for pod list to return data ...
	I0115 10:43:55.374407   46388 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:55.376812   46388 default_sa.go:45] found service account: "default"
	I0115 10:43:55.376833   46388 default_sa.go:55] duration metric: took 2.418755ms for default service account to be created ...
	I0115 10:43:55.376843   46388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:55.383202   46388 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:55.383227   46388 system_pods.go:89] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.383236   46388 system_pods.go:89] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.383244   46388 system_pods.go:89] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.383285   46388 system_pods.go:89] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.383297   46388 system_pods.go:89] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.383303   46388 system_pods.go:89] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.383314   46388 system_pods.go:89] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.383325   46388 system_pods.go:89] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.383338   46388 system_pods.go:126] duration metric: took 6.489813ms to wait for k8s-apps to be running ...
	I0115 10:43:55.383349   46388 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:55.383401   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:55.399074   46388 system_svc.go:56] duration metric: took 15.719638ms WaitForService to wait for kubelet.
	I0115 10:43:55.399096   46388 kubeadm.go:581] duration metric: took 4m22.589439448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:55.399118   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:55.403855   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:55.403883   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:55.403896   46388 node_conditions.go:105] duration metric: took 4.771651ms to run NodePressure ...
	I0115 10:43:55.403908   46388 start.go:228] waiting for startup goroutines ...
	I0115 10:43:55.403917   46388 start.go:233] waiting for cluster config update ...
	I0115 10:43:55.403930   46388 start.go:242] writing updated cluster config ...
	I0115 10:43:55.404244   46388 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:55.453146   46388 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0115 10:43:55.455321   46388 out.go:177] * Done! kubectl is now configured to use "no-preload-824502" cluster and "default" namespace by default
	I0115 10:43:54.756077   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:54.756099   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:54.756104   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:54.756111   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:54.756116   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:54.756131   46387 retry.go:31] will retry after 1.152306172s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:55.913769   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:55.913792   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:55.913798   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:55.913804   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.913810   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:55.913826   46387 retry.go:31] will retry after 2.261296506s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:58.179679   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:58.179704   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:58.179710   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:58.179718   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:58.179722   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:58.179739   46387 retry.go:31] will retry after 2.012023518s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:00.197441   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:00.197471   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:00.197476   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:00.197483   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:00.197487   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:00.197505   46387 retry.go:31] will retry after 3.341619522s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:03.543730   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:03.543752   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:03.543757   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:03.543766   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:03.543771   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:03.543788   46387 retry.go:31] will retry after 2.782711895s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:06.332250   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:06.332276   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:06.332281   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:06.332288   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:06.332294   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:06.332310   46387 retry.go:31] will retry after 5.379935092s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:11.718269   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:11.718315   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:11.718324   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:11.718334   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:11.718343   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:11.718364   46387 retry.go:31] will retry after 6.238812519s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:17.963126   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:17.963150   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:17.963155   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:17.963162   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:17.963167   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:17.963183   46387 retry.go:31] will retry after 7.774120416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:25.743164   46387 system_pods.go:86] 6 kube-system pods found
	I0115 10:44:25.743190   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:25.743196   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:25.743200   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:25.743204   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:25.743210   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:25.743214   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:25.743231   46387 retry.go:31] will retry after 8.584433466s: missing components: kube-apiserver, kube-scheduler
	I0115 10:44:34.335720   46387 system_pods.go:86] 7 kube-system pods found
	I0115 10:44:34.335751   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:34.335759   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:34.335777   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:34.335785   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:34.335793   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:34.335801   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:34.335815   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:34.335834   46387 retry.go:31] will retry after 13.073630932s: missing components: kube-apiserver
	I0115 10:44:47.415277   46387 system_pods.go:86] 8 kube-system pods found
	I0115 10:44:47.415304   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:47.415311   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:47.415318   46387 system_pods.go:89] "kube-apiserver-old-k8s-version-206509" [e708ba3e-5deb-4b60-ab5b-52c4d671fa46] Running
	I0115 10:44:47.415326   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:47.415332   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:47.415339   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:47.415349   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:47.415355   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:47.415371   46387 system_pods.go:126] duration metric: took 57.64651504s to wait for k8s-apps to be running ...
	I0115 10:44:47.415382   46387 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:44:47.415444   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:44:47.433128   46387 system_svc.go:56] duration metric: took 17.740925ms WaitForService to wait for kubelet.
	I0115 10:44:47.433150   46387 kubeadm.go:581] duration metric: took 1m5.827285253s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:44:47.433174   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:44:47.435664   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:44:47.435685   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:44:47.435695   46387 node_conditions.go:105] duration metric: took 2.516113ms to run NodePressure ...
	I0115 10:44:47.435708   46387 start.go:228] waiting for startup goroutines ...
	I0115 10:44:47.435716   46387 start.go:233] waiting for cluster config update ...
	I0115 10:44:47.435728   46387 start.go:242] writing updated cluster config ...
	I0115 10:44:47.436091   46387 ssh_runner.go:195] Run: rm -f paused
	I0115 10:44:47.492053   46387 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0115 10:44:47.494269   46387 out.go:177] 
	W0115 10:44:47.495828   46387 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0115 10:44:47.497453   46387 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0115 10:44:47.498880   46387 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-206509" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:37:59 UTC, ends at Mon 2024-01-15 10:52:10 UTC. --
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.553554007Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-n59ft,Uid:34777797-e585-42b7-852f-87d8bf442f6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315133370283960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:37.433457477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&PodSandboxMetadata{Name:busybox,Uid:453842a7-e912-4899-86dc-3ed65feee9c7,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1705315133343916834,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:37.433456285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ca7a24d606e6c5a76c900e2afe73d52243450fdfa0ee4bb3859acbe428194b0,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-wxclh,Uid:2a52a963-a5dd-4ead-8da3-0d502c2c96ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315125552637891,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-wxclh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52a963-a5dd-4ead-8da3-0d502c2c96ed,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:37.
433449077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&PodSandboxMetadata{Name:kube-proxy-jqgfc,Uid:a0df28b2-1ce0-40c7-b9aa-d56862f39034,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315120049452207,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1ce0-40c7-b9aa-d56862f39034,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:37.433459479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f13c7475-31d6-4aec-9905-070fafc63afa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315120044426706,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-01-15T10:38:37.433454808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-781270,Uid:5e39ee0e9b9e2b796514e8d1d0e7ee69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315110012538220,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.222:8443,kubernetes.io/config.hash: 5e39ee0e9b9e2b796514e8d1d0e7ee69,kubernetes.io/config.seen: 2024-01-15T10:38:29.424379772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&PodSandboxMetadat
a{Name:kube-scheduler-embed-certs-781270,Uid:5b9cdca2e0cfac5bd845b568e4f9f745,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315110008163054,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd845b568e4f9f745,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5b9cdca2e0cfac5bd845b568e4f9f745,kubernetes.io/config.seen: 2024-01-15T10:38:29.424376393Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-781270,Uid:19d87abe6210b88acc403e1bfc13d69c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315109977196801,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ku
be-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 19d87abe6210b88acc403e1bfc13d69c,kubernetes.io/config.seen: 2024-01-15T10:38:29.424363777Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-781270,Uid:c7f255ced3c8832b5eaf0bd0066f2df6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315109972917997,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.222:2379,kubernetes.io/config.hash: c7f255ced3c8832b5eaf0bd006
6f2df6,kubernetes.io/config.seen: 2024-01-15T10:38:29.424378503Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9ad14a90-c66b-41ed-82c5-34fec1ad4ed2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.554456857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=102771fa-ca01-4a03-9d1f-60bdb01e64f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.554509398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=102771fa-ca01-4a03-9d1f-60bdb01e64f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.554733497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=102771fa-ca01-4a03-9d1f-60bdb01e64f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.566935927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=10ab8566-1a0b-4699-97cd-e10c4122ffa3 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.567064174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=10ab8566-1a0b-4699-97cd-e10c4122ffa3 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.567897404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ad798803-d402-4645-9a9d-333b61b82c6e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.568408148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315930568386800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ad798803-d402-4645-9a9d-333b61b82c6e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.568928561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=54fe46f9-9697-45ea-89c1-7ff1b245474a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.568969836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=54fe46f9-9697-45ea-89c1-7ff1b245474a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.569288272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=54fe46f9-9697-45ea-89c1-7ff1b245474a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.607905591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e4bd445-e991-462a-9fbc-1c715882c540 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.607962271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e4bd445-e991-462a-9fbc-1c715882c540 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.609546054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9ace4201-40ac-476f-a1ab-770b6786c985 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.609908766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315930609898002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9ace4201-40ac-476f-a1ab-770b6786c985 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.610460978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f081045e-38ae-4ad2-80a5-a2c8d3196572 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.610504841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f081045e-38ae-4ad2-80a5-a2c8d3196572 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.610689990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f081045e-38ae-4ad2-80a5-a2c8d3196572 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.647137718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b55f5c34-1767-49c6-9280-7b2f7a1cbb69 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.647191221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b55f5c34-1767-49c6-9280-7b2f7a1cbb69 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.648215549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1a592dbb-ac46-436f-8643-37b1f429b566 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.648566430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315930648554688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1a592dbb-ac46-436f-8643-37b1f429b566 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.649330324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=acc8d0f5-bcc9-40f0-aa6e-9f6587017d1d name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.649376595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=acc8d0f5-bcc9-40f0-aa6e-9f6587017d1d name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:10 embed-certs-781270 crio[727]: time="2024-01-15 10:52:10.649576029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=acc8d0f5-bcc9-40f0-aa6e-9f6587017d1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	111601a6dd351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   39234a6ce3622       storage-provisioner
	d451182513357       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f468dc0274416       busybox
	36c0765390486       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   12dc086e474cc       coredns-5dd5756b68-n59ft
	6abb26467c971       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   39234a6ce3622       storage-provisioner
	6f792de826409       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   ffc8ca836544d       kube-proxy-jqgfc
	fd8643f05eca8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   8d6fe96efdec7       kube-scheduler-embed-certs-781270
	30a66dab34a57       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   01d7ef7398d83       etcd-embed-certs-781270
	4dcae24d7ff7b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   2d78f10957e24       kube-apiserver-embed-certs-781270
	4095240514ca1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   76493771191cf       kube-controller-manager-embed-certs-781270
	
	
	==> coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51118 - 8823 "HINFO IN 3301450306179273962.8606541448989940442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037830463s
	
	
	==> describe nodes <==
	Name:               embed-certs-781270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-781270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=embed-certs-781270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_29_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-781270
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:52:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:49:19 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:49:19 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:49:19 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:49:19 +0000   Mon, 15 Jan 2024 10:38:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.222
	  Hostname:    embed-certs-781270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f39339401eb64c2ab4869bf492441844
	  System UUID:                f3933940-1eb6-4c2a-b486-9bf492441844
	  Boot ID:                    4f91d199-0378-4e0d-9609-e343b27e2bad
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-n59ft                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-781270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-781270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-781270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-jqgfc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-781270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-wxclh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node embed-certs-781270 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-781270 event: Registered Node embed-certs-781270 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-781270 event: Registered Node embed-certs-781270 in Controller
	
	
	==> dmesg <==
	[Jan15 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071460] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.577695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.504945] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149130] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan15 10:38] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.481016] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.105353] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.158668] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.105586] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.237391] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.942102] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +22.122069] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] <==
	{"level":"info","ts":"2024-01-15T10:38:39.836732Z","caller":"traceutil/trace.go:171","msg":"trace[810213199] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"978.082334ms","start":"2024-01-15T10:38:38.858626Z","end":"2024-01-15T10:38:39.836708Z","steps":["trace[810213199] 'process raft request'  (duration: 919.341046ms)","trace[810213199] 'compare'  (duration: 56.544318ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:38:39.836891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.858611Z","time spent":"978.21208ms","remote":"127.0.0.1:50920","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6336,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-781270\" mod_revision:322 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-781270\" value_size:6259 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-781270\" > >"}
	{"level":"info","ts":"2024-01-15T10:38:39.837238Z","caller":"traceutil/trace.go:171","msg":"trace[688178221] linearizableReadLoop","detail":"{readStateIndex:599; appliedIndex:597; }","duration":"981.606633ms","start":"2024-01-15T10:38:38.855618Z","end":"2024-01-15T10:38:39.837224Z","steps":["trace[688178221] 'read index received'  (duration: 504.755371ms)","trace[688178221] 'applied index is now lower than readState.Index'  (duration: 476.849391ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:38:39.837374Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"981.761338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:1 size:1930"}
	{"level":"info","ts":"2024-01-15T10:38:39.837402Z","caller":"traceutil/trace.go:171","msg":"trace[873584659] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:570; }","duration":"981.798236ms","start":"2024-01-15T10:38:38.855597Z","end":"2024-01-15T10:38:39.837395Z","steps":["trace[873584659] 'agreement among raft nodes before linearized reading'  (duration: 981.705941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.837436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.855588Z","time spent":"981.841465ms","remote":"127.0.0.1:50956","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":1953,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" "}
	{"level":"warn","ts":"2024-01-15T10:38:39.841378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"948.018724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:39.841418Z","caller":"traceutil/trace.go:171","msg":"trace[574516842] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"948.066743ms","start":"2024-01-15T10:38:38.893341Z","end":"2024-01-15T10:38:39.841408Z","steps":["trace[574516842] 'agreement among raft nodes before linearized reading'  (duration: 947.939884ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.841447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.893328Z","time spent":"948.109618ms","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-15T10:38:39.841692Z","caller":"traceutil/trace.go:171","msg":"trace[1441359675] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"278.864064ms","start":"2024-01-15T10:38:39.562819Z","end":"2024-01-15T10:38:39.841683Z","steps":["trace[1441359675] 'process raft request'  (duration: 278.347753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.841893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"914.361004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:39.84192Z","caller":"traceutil/trace.go:171","msg":"trace[1144079903] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"914.39148ms","start":"2024-01-15T10:38:38.92752Z","end":"2024-01-15T10:38:39.841912Z","steps":["trace[1144079903] 'agreement among raft nodes before linearized reading'  (duration: 914.339783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.84194Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.927506Z","time spent":"914.429656ms","remote":"127.0.0.1:50874","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-15T10:38:40.033941Z","caller":"traceutil/trace.go:171","msg":"trace[19813570] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"181.015704ms","start":"2024-01-15T10:38:39.852904Z","end":"2024-01-15T10:38:40.03392Z","steps":["trace[19813570] 'process raft request'  (duration: 178.341384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:40.034593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.965619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" ","response":"range_response_count:1 size:2025"}
	{"level":"info","ts":"2024-01-15T10:38:40.034682Z","caller":"traceutil/trace.go:171","msg":"trace[993406204] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:572; }","duration":"180.066826ms","start":"2024-01-15T10:38:39.854604Z","end":"2024-01-15T10:38:40.034671Z","steps":["trace[993406204] 'agreement among raft nodes before linearized reading'  (duration: 179.864193ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:38:40.034399Z","caller":"traceutil/trace.go:171","msg":"trace[1360197727] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:600; }","duration":"179.220921ms","start":"2024-01-15T10:38:39.854622Z","end":"2024-01-15T10:38:40.033843Z","steps":["trace[1360197727] 'read index received'  (duration: 176.526673ms)","trace[1360197727] 'applied index is now lower than readState.Index'  (duration: 2.692911ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:38:40.039445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.98729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:40.039795Z","caller":"traceutil/trace.go:171","msg":"trace[690659837] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:573; }","duration":"143.346301ms","start":"2024-01-15T10:38:39.896436Z","end":"2024-01-15T10:38:40.039783Z","steps":["trace[690659837] 'agreement among raft nodes before linearized reading'  (duration: 142.926917ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:38:40.039539Z","caller":"traceutil/trace.go:171","msg":"trace[2017555971] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"179.836449ms","start":"2024-01-15T10:38:39.85969Z","end":"2024-01-15T10:38:40.039526Z","steps":["trace[2017555971] 'process raft request'  (duration: 179.592239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:40.039712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.954294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:40.048836Z","caller":"traceutil/trace.go:171","msg":"trace[1403278608] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:573; }","duration":"121.077066ms","start":"2024-01-15T10:38:39.927747Z","end":"2024-01-15T10:38:40.048824Z","steps":["trace[1403278608] 'agreement among raft nodes before linearized reading'  (duration: 111.940116ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:48:34.839423Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":912}
	{"level":"info","ts":"2024-01-15T10:48:34.843566Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":912,"took":"3.608882ms","hash":3628706100}
	{"level":"info","ts":"2024-01-15T10:48:34.843646Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3628706100,"revision":912,"compact-revision":-1}
	
	
	==> kernel <==
	 10:52:11 up 14 min,  0 users,  load average: 0.08, 0.26, 0.17
	Linux embed-certs-781270 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] <==
	I0115 10:48:36.654649       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:48:37.654473       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:48:37.654688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:48:37.654759       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:48:37.654613       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:48:37.654871       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:48:37.656082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:49:36.513099       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:49:37.655572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:37.655716       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:49:37.655802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:49:37.656835       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:37.656958       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:49:37.657099       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:50:36.513459       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0115 10:51:36.513277       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:51:37.656525       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:51:37.656589       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:51:37.656598       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:51:37.657783       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:51:37.657877       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:51:37.657885       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] <==
	I0115 10:46:21.791680       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:46:51.388645       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:46:51.801694       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:47:21.395416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:47:21.810779       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:47:51.402047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:47:51.821241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:21.407789       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:21.831486       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:51.414371       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:51.840382       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:21.420107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:21.849808       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:51.426063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:51.859593       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:50:03.529112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="484.316µs"
	I0115 10:50:16.529229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="264.748µs"
	E0115 10:50:21.431901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:21.870442       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:50:51.438288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:51.880718       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:51:21.444464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:21.891177       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:51:51.451165       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:51.900488       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] <==
	I0115 10:38:41.215364       1 server_others.go:69] "Using iptables proxy"
	I0115 10:38:41.231124       1 node.go:141] Successfully retrieved node IP: 192.168.72.222
	I0115 10:38:41.284972       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 10:38:41.285113       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:38:41.287793       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:38:41.287867       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:38:41.288117       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:38:41.288297       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:41.288967       1 config.go:188] "Starting service config controller"
	I0115 10:38:41.289112       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:38:41.289151       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:38:41.289167       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:38:41.291381       1 config.go:315] "Starting node config controller"
	I0115 10:38:41.291417       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:38:41.389733       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:38:41.389806       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:38:41.391888       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] <==
	I0115 10:38:33.832228       1 serving.go:348] Generated self-signed cert in-memory
	W0115 10:38:36.643592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:38:36.643731       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:38:36.643742       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:38:36.643748       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:38:36.678245       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0115 10:38:36.678321       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:36.679681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:38:36.679741       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:38:36.680475       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:38:36.680572       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:38:36.780756       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:37:59 UTC, ends at Mon 2024-01-15 10:52:11 UTC. --
	Jan 15 10:49:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:49:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:49:38 embed-certs-781270 kubelet[933]: E0115 10:49:38.510277     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:49:51 embed-certs-781270 kubelet[933]: E0115 10:49:51.521616     933 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:49:51 embed-certs-781270 kubelet[933]: E0115 10:49:51.521658     933 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:49:51 embed-certs-781270 kubelet[933]: E0115 10:49:51.521907     933 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lvhms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wxclh_kube-system(2a52a963-a5dd-4ead-8da3-0d502c2c96ed): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:49:51 embed-certs-781270 kubelet[933]: E0115 10:49:51.521945     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:50:03 embed-certs-781270 kubelet[933]: E0115 10:50:03.511127     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:50:16 embed-certs-781270 kubelet[933]: E0115 10:50:16.511193     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:50:28 embed-certs-781270 kubelet[933]: E0115 10:50:28.511150     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:50:29 embed-certs-781270 kubelet[933]: E0115 10:50:29.530466     933 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:50:29 embed-certs-781270 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:50:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:50:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:50:39 embed-certs-781270 kubelet[933]: E0115 10:50:39.511568     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:50:52 embed-certs-781270 kubelet[933]: E0115 10:50:52.510661     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:51:06 embed-certs-781270 kubelet[933]: E0115 10:51:06.510414     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:51:17 embed-certs-781270 kubelet[933]: E0115 10:51:17.512319     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:51:29 embed-certs-781270 kubelet[933]: E0115 10:51:29.533474     933 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:51:29 embed-certs-781270 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:51:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:51:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:51:30 embed-certs-781270 kubelet[933]: E0115 10:51:30.510426     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:51:44 embed-certs-781270 kubelet[933]: E0115 10:51:44.510886     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:51:57 embed-certs-781270 kubelet[933]: E0115 10:51:57.510930     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	
	
	==> storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] <==
	I0115 10:39:11.876332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:39:11.895209       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:39:11.896434       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:39:29.309564       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:39:29.310223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28!
	I0115 10:39:29.311432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"803c8693-7968-4a63-9365-703529c42c62", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28 became leader
	I0115 10:39:29.411166       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28!
	
	
	==> storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] <==
	I0115 10:38:41.211669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:39:11.214605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-781270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wxclh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh: exit status 1 (63.65299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wxclh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:52:22.882510626 +0000 UTC m=+5154.853678033
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-709012 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-709012 logs -n 25: (1.690403521s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-967423 -- sudo                         | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-967423                                 | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-317803                           | kubernetes-upgrade-317803    | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-824502             | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:34:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:34:59.863813   47063 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:34:59.864093   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864103   47063 out.go:309] Setting ErrFile to fd 2...
	I0115 10:34:59.864108   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864345   47063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:34:59.864916   47063 out.go:303] Setting JSON to false
	I0115 10:34:59.865821   47063 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4600,"bootTime":1705310300,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:34:59.865878   47063 start.go:138] virtualization: kvm guest
	I0115 10:34:59.868392   47063 out.go:177] * [default-k8s-diff-port-709012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:34:59.869886   47063 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:34:59.869920   47063 notify.go:220] Checking for updates...
	I0115 10:34:59.871289   47063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:34:59.872699   47063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:34:59.874242   47063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:34:59.875739   47063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:34:59.877248   47063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:34:59.879143   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:34:59.879618   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.879682   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.893745   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0115 10:34:59.894091   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.894610   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.894633   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.894933   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.895112   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.895305   47063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:34:59.895579   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.895611   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.909045   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0115 10:34:59.909415   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.909868   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.909886   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.910173   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.910346   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.943453   47063 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:34:59.945154   47063 start.go:298] selected driver: kvm2
	I0115 10:34:59.945164   47063 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.945252   47063 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:34:59.945926   47063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.945991   47063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:34:59.959656   47063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:34:59.960028   47063 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:34:59.960078   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:34:59.960091   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:34:59.960106   47063 start_flags.go:321] config:
	{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.960261   47063 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.962534   47063 out.go:177] * Starting control plane node default-k8s-diff-port-709012 in cluster default-k8s-diff-port-709012
	I0115 10:35:00.734685   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:34:59.963970   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:34:59.964003   47063 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:34:59.964012   47063 cache.go:56] Caching tarball of preloaded images
	I0115 10:34:59.964081   47063 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:34:59.964090   47063 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:34:59.964172   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:34:59.964356   47063 start.go:365] acquiring machines lock for default-k8s-diff-port-709012: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:35:06.814638   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:09.886665   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:15.966704   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:19.038663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:25.118649   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:28.190674   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:34.270660   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:37.342618   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:43.422663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:46.494729   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:52.574698   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:55.646737   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:01.726677   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:04.798681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:10.878645   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:13.950716   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:20.030691   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:23.102681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:29.182668   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:32.254641   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:38.334686   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:41.406690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:47.486639   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:50.558690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:56.638684   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:59.710581   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:05.790664   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:08.862738   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:14.942615   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:18.014720   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:24.094644   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:27.098209   46387 start.go:369] acquired machines lock for "old-k8s-version-206509" in 4m37.373222591s
	I0115 10:37:27.098259   46387 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:27.098264   46387 fix.go:54] fixHost starting: 
	I0115 10:37:27.098603   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:27.098633   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:27.112818   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0115 10:37:27.113206   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:27.113638   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:37:27.113660   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:27.113943   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:27.114126   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:27.114270   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:37:27.115824   46387 fix.go:102] recreateIfNeeded on old-k8s-version-206509: state=Stopped err=<nil>
	I0115 10:37:27.115846   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	W0115 10:37:27.116007   46387 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:27.118584   46387 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-206509" ...
	I0115 10:37:27.119985   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Start
	I0115 10:37:27.120145   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring networks are active...
	I0115 10:37:27.120788   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network default is active
	I0115 10:37:27.121077   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network mk-old-k8s-version-206509 is active
	I0115 10:37:27.121463   46387 main.go:141] libmachine: (old-k8s-version-206509) Getting domain xml...
	I0115 10:37:27.122185   46387 main.go:141] libmachine: (old-k8s-version-206509) Creating domain...
	I0115 10:37:28.295990   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting to get IP...
	I0115 10:37:28.297038   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.297393   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.297470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.297380   47440 retry.go:31] will retry after 254.616903ms: waiting for machine to come up
	I0115 10:37:28.553730   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.554213   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.554238   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.554159   47440 retry.go:31] will retry after 350.995955ms: waiting for machine to come up
	I0115 10:37:28.906750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.907189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.907222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.907146   47440 retry.go:31] will retry after 441.292217ms: waiting for machine to come up
	I0115 10:37:29.349643   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.350011   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.350042   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.349959   47440 retry.go:31] will retry after 544.431106ms: waiting for machine to come up
	I0115 10:37:27.096269   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:27.096303   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:37:27.098084   46388 machine.go:91] provisioned docker machine in 4m37.366643974s
	I0115 10:37:27.098120   46388 fix.go:56] fixHost completed within 4m37.388460167s
	I0115 10:37:27.098126   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 4m37.388479036s
	W0115 10:37:27.098153   46388 start.go:694] error starting host: provision: host is not running
	W0115 10:37:27.098242   46388 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 10:37:27.098252   46388 start.go:709] Will try again in 5 seconds ...
	I0115 10:37:29.895609   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.896157   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.896189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.896032   47440 retry.go:31] will retry after 489.420436ms: waiting for machine to come up
	I0115 10:37:30.386614   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:30.387037   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:30.387071   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:30.387005   47440 retry.go:31] will retry after 779.227065ms: waiting for machine to come up
	I0115 10:37:31.167934   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:31.168316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:31.168343   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:31.168273   47440 retry.go:31] will retry after 878.328646ms: waiting for machine to come up
	I0115 10:37:32.048590   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:32.048976   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:32.049001   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:32.048920   47440 retry.go:31] will retry after 1.282650862s: waiting for machine to come up
	I0115 10:37:33.333699   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:33.334132   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:33.334161   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:33.334078   47440 retry.go:31] will retry after 1.548948038s: waiting for machine to come up
	I0115 10:37:32.100253   46388 start.go:365] acquiring machines lock for no-preload-824502: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:37:34.884455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:34.884845   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:34.884866   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:34.884800   47440 retry.go:31] will retry after 1.555315627s: waiting for machine to come up
	I0115 10:37:36.441833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:36.442329   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:36.442352   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:36.442281   47440 retry.go:31] will retry after 1.803564402s: waiting for machine to come up
	I0115 10:37:38.247833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:38.248241   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:38.248283   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:38.248213   47440 retry.go:31] will retry after 3.514521425s: waiting for machine to come up
	I0115 10:37:41.766883   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:41.767187   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:41.767222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:41.767154   47440 retry.go:31] will retry after 4.349871716s: waiting for machine to come up
	I0115 10:37:47.571869   46584 start.go:369] acquired machines lock for "embed-certs-781270" in 4m40.757219204s
	I0115 10:37:47.571928   46584 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:47.571936   46584 fix.go:54] fixHost starting: 
	I0115 10:37:47.572344   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:47.572382   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:47.591532   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0115 10:37:47.591905   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:47.592471   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:37:47.592513   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:47.592835   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:47.593060   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:37:47.593221   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:37:47.594825   46584 fix.go:102] recreateIfNeeded on embed-certs-781270: state=Stopped err=<nil>
	I0115 10:37:47.594856   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	W0115 10:37:47.595015   46584 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:47.597457   46584 out.go:177] * Restarting existing kvm2 VM for "embed-certs-781270" ...
	I0115 10:37:46.118479   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.118936   46387 main.go:141] libmachine: (old-k8s-version-206509) Found IP for machine: 192.168.61.70
	I0115 10:37:46.118960   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserving static IP address...
	I0115 10:37:46.118978   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has current primary IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.119402   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.119425   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserved static IP address: 192.168.61.70
	I0115 10:37:46.119441   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | skip adding static IP to network mk-old-k8s-version-206509 - found existing host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"}
	I0115 10:37:46.119455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Getting to WaitForSSH function...
	I0115 10:37:46.119467   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting for SSH to be available...
	I0115 10:37:46.121874   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122204   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.122236   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122340   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH client type: external
	I0115 10:37:46.122364   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa (-rw-------)
	I0115 10:37:46.122452   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:37:46.122476   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | About to run SSH command:
	I0115 10:37:46.122492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | exit 0
	I0115 10:37:46.214102   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | SSH cmd err, output: <nil>: 
	I0115 10:37:46.214482   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetConfigRaw
	I0115 10:37:46.215064   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.217294   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217579   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.217618   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217784   46387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:37:46.218001   46387 machine.go:88] provisioning docker machine ...
	I0115 10:37:46.218022   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:46.218242   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218440   46387 buildroot.go:166] provisioning hostname "old-k8s-version-206509"
	I0115 10:37:46.218462   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218593   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.220842   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221188   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.221226   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221374   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.221525   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221662   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221760   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.221905   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.222391   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.222411   46387 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-206509 && echo "old-k8s-version-206509" | sudo tee /etc/hostname
	I0115 10:37:46.354906   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-206509
	
	I0115 10:37:46.354939   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.357679   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358051   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.358089   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358245   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.358470   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358642   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358799   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.358957   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.359291   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.359318   46387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-206509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-206509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-206509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:37:46.491369   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:46.491397   46387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:37:46.491413   46387 buildroot.go:174] setting up certificates
	I0115 10:37:46.491422   46387 provision.go:83] configureAuth start
	I0115 10:37:46.491430   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.491687   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.494369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.494779   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494863   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.496985   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497338   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.497368   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497537   46387 provision.go:138] copyHostCerts
	I0115 10:37:46.497598   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:37:46.497613   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:37:46.497694   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:37:46.497806   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:37:46.497818   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:37:46.497848   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:37:46.497925   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:37:46.497945   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:37:46.497982   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:37:46.498043   46387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-206509 san=[192.168.61.70 192.168.61.70 localhost 127.0.0.1 minikube old-k8s-version-206509]
	I0115 10:37:46.824648   46387 provision.go:172] copyRemoteCerts
	I0115 10:37:46.824702   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:37:46.824723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.827470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827785   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.827818   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827972   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.828174   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.828336   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.828484   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:46.919822   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:37:46.941728   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:37:46.963042   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0115 10:37:46.983757   46387 provision.go:86] duration metric: configureAuth took 492.325875ms
	I0115 10:37:46.983777   46387 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:37:46.983966   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:37:46.984048   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.986525   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.986843   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.986869   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.987107   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.987323   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987503   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987651   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.987795   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.988198   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.988219   46387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:37:47.308225   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:37:47.308256   46387 machine.go:91] provisioned docker machine in 1.090242192s
	I0115 10:37:47.308269   46387 start.go:300] post-start starting for "old-k8s-version-206509" (driver="kvm2")
	I0115 10:37:47.308284   46387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:37:47.308310   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.308641   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:37:47.308674   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.311316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311665   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.311700   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311835   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.312024   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.312190   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.312315   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.407169   46387 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:37:47.411485   46387 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:37:47.411504   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:37:47.411566   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:37:47.411637   46387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:37:47.411715   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:37:47.419976   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:47.446992   46387 start.go:303] post-start completed in 138.700951ms
	I0115 10:37:47.447013   46387 fix.go:56] fixHost completed within 20.348748891s
	I0115 10:37:47.447031   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.449638   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.449996   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.450048   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.450136   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.450309   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450620   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.450749   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:47.451070   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:47.451085   46387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:37:47.571711   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315067.520557177
	
	I0115 10:37:47.571729   46387 fix.go:206] guest clock: 1705315067.520557177
	I0115 10:37:47.571748   46387 fix.go:219] Guest: 2024-01-15 10:37:47.520557177 +0000 UTC Remote: 2024-01-15 10:37:47.447016864 +0000 UTC m=+297.904172196 (delta=73.540313ms)
	I0115 10:37:47.571772   46387 fix.go:190] guest clock delta is within tolerance: 73.540313ms
	I0115 10:37:47.571782   46387 start.go:83] releasing machines lock for "old-k8s-version-206509", held for 20.473537585s
	I0115 10:37:47.571810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.572157   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:47.574952   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575328   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.575366   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.575957   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576146   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576232   46387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:37:47.576273   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.576381   46387 ssh_runner.go:195] Run: cat /version.json
	I0115 10:37:47.576406   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.578863   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579052   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579218   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579248   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579347   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579378   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579385   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579577   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579583   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579775   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.579810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579912   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.580094   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.580316   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.702555   46387 ssh_runner.go:195] Run: systemctl --version
	I0115 10:37:47.708309   46387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:37:47.862103   46387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:37:47.869243   46387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:37:47.869321   46387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:37:47.886013   46387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:37:47.886033   46387 start.go:475] detecting cgroup driver to use...
	I0115 10:37:47.886093   46387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:37:47.901265   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:37:47.913762   46387 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:37:47.913815   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:37:47.926880   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:37:47.942744   46387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:37:48.050667   46387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:37:48.168614   46387 docker.go:233] disabling docker service ...
	I0115 10:37:48.168679   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:37:48.181541   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:37:48.193155   46387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:37:48.312374   46387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:37:48.420624   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:37:48.432803   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:37:48.449232   46387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0115 10:37:48.449292   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.458042   46387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:37:48.458109   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.466909   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.475511   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.484081   46387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:37:48.493186   46387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:37:48.502460   46387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:37:48.502507   46387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:37:48.514913   46387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:37:48.522816   46387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:37:48.630774   46387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:37:48.807089   46387 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:37:48.807170   46387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:37:48.812950   46387 start.go:543] Will wait 60s for crictl version
	I0115 10:37:48.813005   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:48.816919   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:37:48.860058   46387 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:37:48.860143   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.916839   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.968312   46387 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0115 10:37:48.969913   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:48.972776   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973219   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:48.973249   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973519   46387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:37:48.977593   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:48.990551   46387 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:37:48.990613   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:49.030917   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:49.030973   46387 ssh_runner.go:195] Run: which lz4
	I0115 10:37:49.035059   46387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:37:49.039231   46387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:37:49.039262   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
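The preload step above first stats /preloaded.tar.lz4 on the guest and only copies the cached tarball when the stat fails. A hedged, local-filesystem sketch of that check-then-copy flow (the real flow runs stat and scp over SSH inside the VM; the paths here are assumptions):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensurePreload copies the cached tarball to target only if target is absent.
func ensurePreload(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, nothing to copy
	}
	src, err := os.Open(cached)
	if err != nil {
		return fmt.Errorf("open cached preload: %w", err)
	}
	defer src.Close()
	dst, err := os.Create(target)
	if err != nil {
		return fmt.Errorf("create target: %w", err)
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	if err := ensurePreload("/tmp/cache/preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload copy failed:", err)
	}
}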
	I0115 10:37:47.598904   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Start
	I0115 10:37:47.599102   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring networks are active...
	I0115 10:37:47.599886   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network default is active
	I0115 10:37:47.600258   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network mk-embed-certs-781270 is active
	I0115 10:37:47.600652   46584 main.go:141] libmachine: (embed-certs-781270) Getting domain xml...
	I0115 10:37:47.601365   46584 main.go:141] libmachine: (embed-certs-781270) Creating domain...
	I0115 10:37:48.842510   46584 main.go:141] libmachine: (embed-certs-781270) Waiting to get IP...
	I0115 10:37:48.843267   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:48.843637   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:48.843731   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:48.843603   47574 retry.go:31] will retry after 262.69562ms: waiting for machine to come up
	I0115 10:37:49.108361   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.108861   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.108901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.108796   47574 retry.go:31] will retry after 379.820541ms: waiting for machine to come up
	I0115 10:37:49.490343   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.490939   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.490979   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.490898   47574 retry.go:31] will retry after 463.282743ms: waiting for machine to come up
	I0115 10:37:49.956222   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.956694   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.956725   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.956646   47574 retry.go:31] will retry after 539.780461ms: waiting for machine to come up
	I0115 10:37:50.498391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:50.498901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:50.498935   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:50.498849   47574 retry.go:31] will retry after 611.580301ms: waiting for machine to come up
	I0115 10:37:51.111752   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.112228   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.112263   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.112194   47574 retry.go:31] will retry after 837.335782ms: waiting for machine to come up
	I0115 10:37:50.824399   46387 crio.go:444] Took 1.789376 seconds to copy over tarball
	I0115 10:37:50.824466   46387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:37:53.837707   46387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013210203s)
	I0115 10:37:53.837742   46387 crio.go:451] Took 3.013322 seconds to extract the tarball
	I0115 10:37:53.837753   46387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:37:53.876939   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:53.922125   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:53.922161   46387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:37:53.922213   46387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:53.922249   46387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.922267   46387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.922300   46387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.922520   46387 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.922527   46387 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.922544   46387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.922547   46387 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.923794   46387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.923809   46387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.923811   46387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.923807   46387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.923785   46387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.923843   46387 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.083650   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0115 10:37:54.090328   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.095213   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.123642   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.124012   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.139399   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.139406   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.207117   46387 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0115 10:37:54.207170   46387 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0115 10:37:54.207168   46387 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0115 10:37:54.207202   46387 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.207230   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.207248   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.248774   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.269586   46387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0115 10:37:54.269636   46387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.269661   46387 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0115 10:37:54.269693   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.269693   46387 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.269785   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404758   46387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0115 10:37:54.404862   46387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0115 10:37:54.404907   46387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.404969   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0115 10:37:54.404996   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404873   46387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.405034   46387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0115 10:37:54.405064   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404975   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.405082   46387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.405174   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.405202   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.405149   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.502357   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.502402   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0115 10:37:54.502507   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0115 10:37:54.502547   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.502504   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.502620   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0115 10:37:54.510689   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0115 10:37:54.577797   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0115 10:37:54.577854   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0115 10:37:54.577885   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0115 10:37:54.577945   46387 cache_images.go:92] LoadImages completed in 655.770059ms
	W0115 10:37:54.578019   46387 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0115 10:37:54.578091   46387 ssh_runner.go:195] Run: crio config
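The cache_images lines above follow a simple per-image decision: inspect the image in the container runtime, and if it is missing (or its hash does not match), remove it and schedule a load from the local cache directory. A rough Go sketch of that decision, with assumed names and paths rather than minikube's real cache_images.go API:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// runtimeImages stands in for the result of inspecting each image on the VM
// (e.g. via `sudo podman image inspect` / `crictl images`).
var runtimeImages = map[string]bool{
	"registry.k8s.io/kube-apiserver:v1.16.0": false,
	"registry.k8s.io/pause:3.1":              false,
}

// cachePathFor maps an image reference to its on-disk cache file,
// e.g. registry.k8s.io/pause:3.1 -> <cacheDir>/registry.k8s.io/pause_3.1
func cachePathFor(image, cacheDir string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // assumed layout
	for image, present := range runtimeImages {
		if present {
			continue
		}
		fmt.Printf("%q needs transfer: not in container runtime\n", image)
		fmt.Printf("Loading image from: %s\n", cachePathFor(image, cacheDir))
	}
}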
	I0115 10:37:51.950759   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.951289   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.951322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.951237   47574 retry.go:31] will retry after 817.063291ms: waiting for machine to come up
	I0115 10:37:52.770506   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:52.771015   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:52.771043   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:52.770977   47574 retry.go:31] will retry after 1.000852987s: waiting for machine to come up
	I0115 10:37:53.774011   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:53.774478   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:53.774518   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:53.774452   47574 retry.go:31] will retry after 1.171113667s: waiting for machine to come up
	I0115 10:37:54.947562   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:54.947925   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:54.947951   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:54.947887   47574 retry.go:31] will retry after 1.982035367s: waiting for machine to come up
	I0115 10:37:54.646104   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:37:54.750728   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:37:54.750754   46387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:37:54.750779   46387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-206509 NodeName:old-k8s-version-206509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 10:37:54.750935   46387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-206509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-206509
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:37:54.751014   46387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-206509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:37:54.751063   46387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0115 10:37:54.761568   46387 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:37:54.761645   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:37:54.771892   46387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0115 10:37:54.788678   46387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:37:54.804170   46387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
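	The three "scp memory" lines above materialize the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the rendered kubeadm YAML on the guest. A quick hand-run check of what landed, assuming the paths from this log, would be:

	sudo systemctl cat kubelet                   # prints kubelet.service plus the 10-kubeadm.conf drop-in
	sudo cat /var/tmp/minikube/kubeadm.yaml.new  # the rendered config shown above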
	I0115 10:37:54.820285   46387 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I0115 10:37:54.823831   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:54.834806   46387 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509 for IP: 192.168.61.70
	I0115 10:37:54.834838   46387 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:37:54.835023   46387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:37:54.835070   46387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:37:54.835136   46387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.key
	I0115 10:37:54.835190   46387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key.99472042
	I0115 10:37:54.835249   46387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key
	I0115 10:37:54.835356   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:37:54.835392   46387 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:37:54.835401   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:37:54.835439   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:37:54.835467   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:37:54.835491   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:37:54.835531   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:54.836204   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:37:54.859160   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:37:54.884674   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:37:54.907573   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:37:54.930846   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:37:54.953329   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:37:54.975335   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:37:54.997505   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:37:55.020494   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:37:55.042745   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:37:55.064085   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:37:55.085243   46387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:37:55.101189   46387 ssh_runner.go:195] Run: openssl version
	I0115 10:37:55.106849   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:37:55.118631   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123477   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123545   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.129290   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:37:55.141464   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:37:55.153514   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157901   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157967   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.163557   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:37:55.173419   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:37:55.184850   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189454   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189508   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.194731   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
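	The "openssl x509 -hash" / "ln -fs ... <hash>.0" pairs above follow OpenSSL's hashed trust-directory convention: the certificate store is looked up by subject-name hash, so every CA file needs a symlink named after that hash. A minimal sketch of the same step for one certificate, using the minikubeCA path from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run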
	I0115 10:37:55.205634   46387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:37:55.209881   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:37:55.215521   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:37:55.221031   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:37:55.226730   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:37:55.232566   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:37:55.238251   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
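	The "-checkend 86400" runs above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit presumably steers minikube toward regenerating certificates instead of reusing them. A standalone sketch against one of the paths above:

	# exit 0: still valid in 24h; exit 1: expires (or has expired) within 24h
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "ok for at least another day" || echo "expiring soon"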
	I0115 10:37:55.244098   46387 kubeadm.go:404] StartCluster: {Name:old-k8s-version-206509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:37:55.244188   46387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:37:55.244243   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:37:55.293223   46387 cri.go:89] found id: ""
	I0115 10:37:55.293296   46387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:37:55.305374   46387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:37:55.305403   46387 kubeadm.go:636] restartCluster start
	I0115 10:37:55.305477   46387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:37:55.314925   46387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.316564   46387 kubeconfig.go:92] found "old-k8s-version-206509" server: "https://192.168.61.70:8443"
	I0115 10:37:55.319961   46387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:37:55.329062   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.329148   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.340866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.829433   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.829549   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.843797   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.329336   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.329436   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.343947   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.829507   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.829623   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.843692   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.329438   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.329522   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.341416   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.830063   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.830153   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.844137   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.329648   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.329743   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.342211   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.829792   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.829891   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.842397   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:59.330122   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.330202   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.346667   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.931004   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:56.931428   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:56.931461   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:56.931364   47574 retry.go:31] will retry after 2.358737657s: waiting for machine to come up
	I0115 10:37:59.292322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:59.292784   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:59.292817   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:59.292726   47574 retry.go:31] will retry after 2.808616591s: waiting for machine to come up
	I0115 10:37:59.829162   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.829242   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.844148   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.329799   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.329901   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.345118   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.829706   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.829806   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.845105   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.329598   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.329678   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.341872   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.829350   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.829424   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.843987   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.329874   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.329944   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.342152   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.829617   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.829711   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.841636   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.329206   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.329306   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.341373   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.829987   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.830080   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.842151   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:04.329957   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.330047   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.342133   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.103667   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:02.104098   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:02.104127   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:02.104058   47574 retry.go:31] will retry after 2.823867183s: waiting for machine to come up
	I0115 10:38:04.931219   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:04.931550   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:04.931594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:04.931523   47574 retry.go:31] will retry after 4.042933854s: waiting for machine to come up
	I0115 10:38:04.829477   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.829599   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.841546   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.329351   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:05.329417   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:05.341866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.341892   46387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:05.341900   46387 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:05.341910   46387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:05.342037   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:05.376142   46387 cri.go:89] found id: ""
	I0115 10:38:05.376206   46387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:05.391778   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:05.402262   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:05.402331   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411457   46387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411489   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:05.526442   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.239898   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.449098   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.515862   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.598545   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:06.598653   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.099595   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.599677   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.099492   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.599629   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.627737   46387 api_server.go:72] duration metric: took 2.029196375s to wait for apiserver process to appear ...
	I0115 10:38:08.627766   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:08.627803   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
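	The healthz wait that starts here is equivalent to probing the API server endpoint directly; a hand-run version against the address from this log (self-signed certificates, hence -k) would look like:

	curl -k https://192.168.61.70:8443/healthz   # expected body on success: "ok"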
	I0115 10:38:10.199201   47063 start.go:369] acquired machines lock for "default-k8s-diff-port-709012" in 3m10.23481312s
	I0115 10:38:10.199261   47063 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:10.199269   47063 fix.go:54] fixHost starting: 
	I0115 10:38:10.199630   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:10.199667   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:10.215225   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0115 10:38:10.215627   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:10.216040   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:10.216068   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:10.216372   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:10.216583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:10.216829   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:10.218454   47063 fix.go:102] recreateIfNeeded on default-k8s-diff-port-709012: state=Stopped err=<nil>
	I0115 10:38:10.218482   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	W0115 10:38:10.218676   47063 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:10.220860   47063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-709012" ...
	I0115 10:38:08.976035   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976545   46584 main.go:141] libmachine: (embed-certs-781270) Found IP for machine: 192.168.72.222
	I0115 10:38:08.976574   46584 main.go:141] libmachine: (embed-certs-781270) Reserving static IP address...
	I0115 10:38:08.976592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has current primary IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976946   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.976980   46584 main.go:141] libmachine: (embed-certs-781270) DBG | skip adding static IP to network mk-embed-certs-781270 - found existing host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"}
	I0115 10:38:08.976997   46584 main.go:141] libmachine: (embed-certs-781270) Reserved static IP address: 192.168.72.222
	I0115 10:38:08.977017   46584 main.go:141] libmachine: (embed-certs-781270) Waiting for SSH to be available...
	I0115 10:38:08.977033   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Getting to WaitForSSH function...
	I0115 10:38:08.979155   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979456   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.979483   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979609   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH client type: external
	I0115 10:38:08.979658   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa (-rw-------)
	I0115 10:38:08.979699   46584 main.go:141] libmachine: (embed-certs-781270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:08.979718   46584 main.go:141] libmachine: (embed-certs-781270) DBG | About to run SSH command:
	I0115 10:38:08.979734   46584 main.go:141] libmachine: (embed-certs-781270) DBG | exit 0
	I0115 10:38:09.082171   46584 main.go:141] libmachine: (embed-certs-781270) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:09.082546   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetConfigRaw
	I0115 10:38:09.083235   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.085481   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.085845   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.085873   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.086115   46584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:38:09.086309   46584 machine.go:88] provisioning docker machine ...
	I0115 10:38:09.086331   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.086549   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086714   46584 buildroot.go:166] provisioning hostname "embed-certs-781270"
	I0115 10:38:09.086736   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086884   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.089346   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089702   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.089727   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.090035   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090180   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090319   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.090464   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.090845   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.090862   46584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname
	I0115 10:38:09.240609   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781270
	
	I0115 10:38:09.240643   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.243233   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243586   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.243616   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243764   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.243976   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244292   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.244453   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.244774   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.244800   46584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781270/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:09.388902   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:09.388932   46584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:09.388968   46584 buildroot.go:174] setting up certificates
	I0115 10:38:09.388981   46584 provision.go:83] configureAuth start
	I0115 10:38:09.388998   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.389254   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.392236   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392603   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.392643   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392750   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.395249   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395596   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.395629   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395797   46584 provision.go:138] copyHostCerts
	I0115 10:38:09.395858   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:09.395872   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:09.395939   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:09.396037   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:09.396045   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:09.396067   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:09.396134   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:09.396141   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:09.396159   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:09.396212   46584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781270 san=[192.168.72.222 192.168.72.222 localhost 127.0.0.1 minikube embed-certs-781270]
	I0115 10:38:09.457000   46584 provision.go:172] copyRemoteCerts
	I0115 10:38:09.457059   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:09.457081   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.459709   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460074   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.460102   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460356   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.460522   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.460681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.460798   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:09.556211   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:09.578947   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:09.601191   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:09.623814   46584 provision.go:86] duration metric: configureAuth took 234.815643ms
	I0115 10:38:09.623844   46584 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:09.624070   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:09.624157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.626592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.626930   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.626972   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.627141   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.627326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627492   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627607   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.627755   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.628058   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.628086   46584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:09.931727   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:09.931765   46584 machine.go:91] provisioned docker machine in 845.442044ms
	I0115 10:38:09.931777   46584 start.go:300] post-start starting for "embed-certs-781270" (driver="kvm2")
	I0115 10:38:09.931790   46584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:09.931810   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.932100   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:09.932130   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.934487   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934811   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.934836   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934999   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.935160   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.935313   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.935480   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.028971   46584 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:10.032848   46584 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:10.032871   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:10.032955   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:10.033045   46584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:10.033162   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:10.042133   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:10.064619   46584 start.go:303] post-start completed in 132.827155ms
	I0115 10:38:10.064658   46584 fix.go:56] fixHost completed within 22.492708172s
	I0115 10:38:10.064681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.067323   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067651   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.067675   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067812   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.068037   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068272   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068449   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.068587   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:10.068904   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:10.068919   46584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 10:38:10.199025   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315090.148648598
	
	I0115 10:38:10.199045   46584 fix.go:206] guest clock: 1705315090.148648598
	I0115 10:38:10.199053   46584 fix.go:219] Guest: 2024-01-15 10:38:10.148648598 +0000 UTC Remote: 2024-01-15 10:38:10.064662616 +0000 UTC m=+303.401739583 (delta=83.985982ms)
	I0115 10:38:10.199088   46584 fix.go:190] guest clock delta is within tolerance: 83.985982ms
	I0115 10:38:10.199096   46584 start.go:83] releasing machines lock for "embed-certs-781270", held for 22.627192785s
	I0115 10:38:10.199122   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.199368   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:10.201962   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202349   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.202389   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202603   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203417   46584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:10.203461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.203546   46584 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:10.203570   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.206022   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206257   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206371   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206400   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.206673   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206700   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206768   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.206910   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.206911   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.207087   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.207191   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.207335   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.207465   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.327677   46584 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:10.333127   46584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:10.473183   46584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:10.480054   46584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:10.480115   46584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:10.494367   46584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:10.494388   46584 start.go:475] detecting cgroup driver to use...
	I0115 10:38:10.494463   46584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:10.508327   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:10.519950   46584 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:10.520003   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:10.531743   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:10.544980   46584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:10.650002   46584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:10.767145   46584 docker.go:233] disabling docker service ...
	I0115 10:38:10.767214   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:10.782073   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:10.796419   46584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:10.913422   46584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:11.016113   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:11.032638   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:11.053360   46584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:11.053415   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.064008   46584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:11.064067   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.074353   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.084486   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
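	The sed invocations above rewrite three keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup). The underlying technique is a whole-line regexp replace on a config file; below is a hedged local sketch of that technique, not the code minikube actually runs (which shells out over SSH as logged):

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteKey replaces any existing `key = ...` line in a CRI-O drop-in with the
// given value, mirroring what the `sudo sed -i` commands in the log do.
func rewriteKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path and value match the log; running this for real requires root.
	if err := rewriteKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}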
	I0115 10:38:11.093962   46584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:11.105487   46584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:11.117411   46584 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:11.117469   46584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:11.133780   46584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:11.145607   46584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:11.257012   46584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:11.437979   46584 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:11.438050   46584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:11.445814   46584 start.go:543] Will wait 60s for crictl version
	I0115 10:38:11.445896   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:38:11.449770   46584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:11.491895   46584 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:11.491985   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.543656   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.609733   46584 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:11.611238   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:11.614594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.614947   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:11.614988   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.615225   46584 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:11.619516   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:11.635101   46584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:11.635170   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:11.675417   46584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:11.675504   46584 ssh_runner.go:195] Run: which lz4
	I0115 10:38:11.679733   46584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:38:11.683858   46584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:11.683889   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:13.628977   46387 api_server.go:269] stopped: https://192.168.61.70:8443/healthz: Get "https://192.168.61.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0115 10:38:13.629022   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.222501   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Start
	I0115 10:38:10.222694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring networks are active...
	I0115 10:38:10.223335   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network default is active
	I0115 10:38:10.225164   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network mk-default-k8s-diff-port-709012 is active
	I0115 10:38:10.225189   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Getting domain xml...
	I0115 10:38:10.225201   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Creating domain...
	I0115 10:38:11.529205   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting to get IP...
	I0115 10:38:11.530265   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530808   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530886   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.530786   47689 retry.go:31] will retry after 220.836003ms: waiting for machine to come up
	I0115 10:38:11.753500   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754152   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.754119   47689 retry.go:31] will retry after 288.710195ms: waiting for machine to come up
	I0115 10:38:12.044613   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045149   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045179   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.045065   47689 retry.go:31] will retry after 321.962888ms: waiting for machine to come up
	I0115 10:38:12.368694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369119   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369171   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.369075   47689 retry.go:31] will retry after 457.128837ms: waiting for machine to come up
	I0115 10:38:12.827574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828079   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828108   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.828011   47689 retry.go:31] will retry after 524.042929ms: waiting for machine to come up
	I0115 10:38:13.353733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354288   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354315   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:13.354237   47689 retry.go:31] will retry after 885.937378ms: waiting for machine to come up
	I0115 10:38:14.241653   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242258   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:14.242185   47689 retry.go:31] will retry after 1.168061338s: waiting for machine to come up
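	The retry.go lines in this block are a wait loop: libvirt has defined the default-k8s-diff-port-709012 domain, but DHCP has not handed it an IP yet, so the lease lookup is retried with growing, slightly jittered delays. A generic sketch of that pattern follows (delays and predicate are illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with roughly exponential, jittered backoff until it
// succeeds or the deadline passes, similar in spirit to the retry lines above.
func waitFor(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := waitFor(10*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up") // stand-in for the DHCP lease lookup
		}
		return nil
	})
	fmt.Println("done:", err)
}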
	I0115 10:38:14.984346   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:14.984377   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:14.984395   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.129596   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:15.129627   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:15.129650   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.224825   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.224852   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:15.628377   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.666573   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.666642   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:16.128080   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:16.148642   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:38:16.156904   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:38:16.156927   46387 api_server.go:131] duration metric: took 7.529154555s to wait for apiserver health ...
	I0115 10:38:16.156936   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:38:16.156942   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:16.159248   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
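	The 403, 500 and finally 200 responses above come from repeatedly probing the apiserver's /healthz endpoint until its post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and so on) finish. A minimal sketch of such a probe, assuming an anonymous HTTPS client with certificate verification disabled as a test harness would use (URL taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The probe in the log hits the apiserver anonymously over self-signed TLS,
			// so a sketch has to skip verification; never do this against real clusters.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.70:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane is healthy
			}
		} else {
			fmt.Println("healthz probe failed:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}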
	I0115 10:38:13.665699   46584 crio.go:444] Took 1.986003 seconds to copy over tarball
	I0115 10:38:13.665769   46584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:16.702911   46584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037102789s)
	I0115 10:38:16.702954   46584 crio.go:451] Took 3.037230 seconds to extract the tarball
	I0115 10:38:16.702966   46584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:16.160810   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:16.173072   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:16.205009   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:16.216599   46387 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:16.216637   46387 system_pods.go:61] "coredns-5644d7b6d9-5qcrz" [3fc31c2b-9c3f-4167-8b3f-bbe262591a90] Running
	I0115 10:38:16.216645   46387 system_pods.go:61] "coredns-5644d7b6d9-rgrbc" [1c2c2a33-f329-4cb3-8e05-900a252ceed3] Running
	I0115 10:38:16.216651   46387 system_pods.go:61] "etcd-old-k8s-version-206509" [8c2919cc-4b82-4387-be0d-f3decf4b324b] Running
	I0115 10:38:16.216658   46387 system_pods.go:61] "kube-apiserver-old-k8s-version-206509" [51e63cf2-5728-471d-b447-3f3aa9454ac7] Running
	I0115 10:38:16.216663   46387 system_pods.go:61] "kube-controller-manager-old-k8s-version-206509" [6dec6bf0-ce5d-4f87-8bf7-c774214eb8ea] Running
	I0115 10:38:16.216668   46387 system_pods.go:61] "kube-proxy-w9fdn" [42b28054-8876-4854-a041-62be5688c1c2] Running
	I0115 10:38:16.216675   46387 system_pods.go:61] "kube-scheduler-old-k8s-version-206509" [7a50352c-2129-4de4-84e8-3cb5d8ccd463] Running
	I0115 10:38:16.216681   46387 system_pods.go:61] "storage-provisioner" [f341413b-8261-4a78-9f28-449be173cf19] Running
	I0115 10:38:16.216690   46387 system_pods.go:74] duration metric: took 11.655731ms to wait for pod list to return data ...
	I0115 10:38:16.216703   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:16.220923   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:16.220962   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:16.220978   46387 node_conditions.go:105] duration metric: took 4.267954ms to run NodePressure ...
	I0115 10:38:16.221005   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:16.519042   46387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:16.523772   46387 retry.go:31] will retry after 264.775555ms: kubelet not initialised
	I0115 10:38:17.172203   46387 retry.go:31] will retry after 553.077445ms: kubelet not initialised
	I0115 10:38:18.053202   46387 retry.go:31] will retry after 653.279352ms: kubelet not initialised
	I0115 10:38:18.837753   46387 retry.go:31] will retry after 692.673954ms: kubelet not initialised
	I0115 10:38:19.596427   46387 retry.go:31] will retry after 679.581071ms: kubelet not initialised
	I0115 10:38:15.412204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412706   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412766   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:15.412670   47689 retry.go:31] will retry after 895.041379ms: waiting for machine to come up
	I0115 10:38:16.309188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309764   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:16.309692   47689 retry.go:31] will retry after 1.593821509s: waiting for machine to come up
	I0115 10:38:17.904625   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905131   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905168   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:17.905073   47689 retry.go:31] will retry after 2.002505122s: waiting for machine to come up
	I0115 10:38:16.745093   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:17.184204   46584 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:17.184235   46584 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:17.184325   46584 ssh_runner.go:195] Run: crio config
	I0115 10:38:17.249723   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:17.249748   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:17.249764   46584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:17.249782   46584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-781270 NodeName:embed-certs-781270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:17.249936   46584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-781270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:17.250027   46584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-781270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:38:17.250091   46584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:17.262237   46584 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:17.262313   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:17.273370   46584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0115 10:38:17.292789   46584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:17.312254   46584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0115 10:38:17.332121   46584 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:17.336199   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:17.349009   46584 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270 for IP: 192.168.72.222
	I0115 10:38:17.349047   46584 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:17.349200   46584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:17.349246   46584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:17.349316   46584 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/client.key
	I0115 10:38:17.685781   46584 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key.4e007618
	I0115 10:38:17.685874   46584 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key
	I0115 10:38:17.685990   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:17.686022   46584 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:17.686033   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:17.686054   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:17.686085   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:17.686107   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:17.686147   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:17.686866   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:17.713652   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:17.744128   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:17.771998   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:17.796880   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:17.822291   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:17.848429   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:17.874193   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:17.898873   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:17.922742   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:17.945123   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:17.967188   46584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:17.983237   46584 ssh_runner.go:195] Run: openssl version
	I0115 10:38:17.988658   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:17.998141   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002462   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002521   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.008136   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:18.017766   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:18.027687   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032418   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032479   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.038349   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:18.048395   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:18.058675   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063369   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063441   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.068886   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:18.078459   46584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:18.083181   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:18.089264   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:18.095399   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:18.101292   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:18.107113   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:18.112791   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
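	The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least another 24 hours. The equivalent check can be expressed with Go's standard library; a sketch (the path is one of the files from the log, and the 24h window matches -checkend 86400):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}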
	I0115 10:38:18.118337   46584 kubeadm.go:404] StartCluster: {Name:embed-certs-781270 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:18.118561   46584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:18.118611   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:18.162363   46584 cri.go:89] found id: ""
	I0115 10:38:18.162454   46584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:18.172261   46584 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:18.172286   46584 kubeadm.go:636] restartCluster start
	I0115 10:38:18.172357   46584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:18.181043   46584 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.182845   46584 kubeconfig.go:92] found "embed-certs-781270" server: "https://192.168.72.222:8443"
	I0115 10:38:18.186506   46584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:18.194997   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.195069   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.205576   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.695105   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.695200   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.709836   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.195362   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.195533   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.210585   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.695088   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.695201   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.710436   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.196063   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.196145   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.211948   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.695433   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.695545   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.710981   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.195510   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.195588   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.206769   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.695111   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.695192   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.706765   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.288898   46387 retry.go:31] will retry after 1.97886626s: kubelet not initialised
	I0115 10:38:22.273756   46387 retry.go:31] will retry after 2.35083465s: kubelet not initialised
	I0115 10:38:19.909015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909598   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:19.909539   47689 retry.go:31] will retry after 2.883430325s: waiting for machine to come up
	I0115 10:38:22.794280   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794702   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794729   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:22.794660   47689 retry.go:31] will retry after 3.219865103s: waiting for machine to come up
	I0115 10:38:22.195343   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.195454   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.210740   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:22.695835   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.695900   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.710247   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.195555   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.195633   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.207117   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.695569   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.695632   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.706867   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.195323   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.195428   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.207679   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.695971   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.696049   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.708342   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.195900   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.195994   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.207896   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.695417   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.695490   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.706180   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.195799   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.195890   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.206859   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.695558   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.695648   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.706652   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
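	The long run of "Checking apiserver status" / pgrep failures above is a fixed-interval poll (roughly every 500ms) that keeps failing because kube-apiserver has not come back yet after the crio restart. A sketch of the same poll-until-running pattern, run locally rather than over SSH and purely illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears or the
// timeout expires, returning the matching PID. pgrep exits non-zero on no match.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("no process matching %q after %v", pattern, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}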
	I0115 10:38:24.630486   46387 retry.go:31] will retry after 5.638904534s: kubelet not initialised
	I0115 10:38:26.016121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016496   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016520   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:26.016463   47689 retry.go:31] will retry after 3.426285557s: waiting for machine to come up
	I0115 10:38:29.447165   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447643   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Found IP for machine: 192.168.39.125
	I0115 10:38:29.447678   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has current primary IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447719   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserving static IP address...
	I0115 10:38:29.448146   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.448172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | skip adding static IP to network mk-default-k8s-diff-port-709012 - found existing host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"}
	I0115 10:38:29.448183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserved static IP address: 192.168.39.125
	I0115 10:38:29.448204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for SSH to be available...
	I0115 10:38:29.448215   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Getting to WaitForSSH function...
	I0115 10:38:29.450376   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450690   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.450715   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH client type: external
	I0115 10:38:29.450867   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa (-rw-------)
	I0115 10:38:29.450899   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:29.450909   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | About to run SSH command:
	I0115 10:38:29.450919   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | exit 0
	I0115 10:38:29.550560   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:29.550940   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetConfigRaw
	I0115 10:38:29.551686   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.554629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555085   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.555117   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555426   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:38:29.555642   47063 machine.go:88] provisioning docker machine ...
	I0115 10:38:29.555672   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:29.555875   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556053   47063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-709012"
	I0115 10:38:29.556076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556217   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.558493   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.558804   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.558835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.559018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.559209   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559363   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.559677   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.560009   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.560028   47063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-709012 && echo "default-k8s-diff-port-709012" | sudo tee /etc/hostname
	I0115 10:38:29.706028   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-709012
	
	I0115 10:38:29.706059   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.708893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.709343   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709409   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.709631   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709789   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709938   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.710121   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.710473   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.710501   47063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-709012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-709012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-709012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:29.845884   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
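The shell fragment above is idempotent: it rewrites an existing 127.0.1.1 entry if one is present, and only appends a new mapping when the machine hostname is missing from /etc/hosts. A quick manual check on the guest (a sketch, using the same hostname) would be:

    grep -n 'default-k8s-diff-port-709012' /etc/hosts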
	I0115 10:38:29.845916   47063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:29.845938   47063 buildroot.go:174] setting up certificates
	I0115 10:38:29.845953   47063 provision.go:83] configureAuth start
	I0115 10:38:29.845973   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.846293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.849072   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.849558   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849755   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.852196   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852548   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.852574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852664   47063 provision.go:138] copyHostCerts
	I0115 10:38:29.852716   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:29.852726   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:29.852778   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:29.852870   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:29.852877   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:29.852896   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:29.852957   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:29.852964   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:29.852981   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:29.853031   47063 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-709012 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube default-k8s-diff-port-709012]
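The server certificate generated here carries the VM IP, localhost, 127.0.0.1 and both machine names as SANs. A minimal sketch for inspecting those SANs from the host, assuming the server.pem path logged above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'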
	I0115 10:38:30.777181   46388 start.go:369] acquired machines lock for "no-preload-824502" in 58.676870352s
	I0115 10:38:30.777252   46388 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:30.777263   46388 fix.go:54] fixHost starting: 
	I0115 10:38:30.777697   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:30.777733   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:30.795556   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0115 10:38:30.795931   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:30.796387   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:38:30.796417   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:30.796825   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:30.797001   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:30.797164   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:38:30.798953   46388 fix.go:102] recreateIfNeeded on no-preload-824502: state=Stopped err=<nil>
	I0115 10:38:30.798978   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	W0115 10:38:30.799146   46388 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:30.800981   46388 out.go:177] * Restarting existing kvm2 VM for "no-preload-824502" ...
	I0115 10:38:27.195033   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.195128   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.205968   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:27.695992   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.696075   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.707112   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.195726   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:28.195798   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:28.206794   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.206837   46584 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:28.206846   46584 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:28.206858   46584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:28.206917   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:28.256399   46584 cri.go:89] found id: ""
	I0115 10:38:28.256468   46584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:28.272234   46584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:28.281359   46584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:28.281439   46584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290385   46584 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290431   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:28.417681   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.012673   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.212322   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.296161   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.378870   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:29.378965   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.879587   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.379077   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.879281   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:31.379626   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.951966   47063 provision.go:172] copyRemoteCerts
	I0115 10:38:29.952019   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:29.952040   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.954784   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955082   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.955104   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955285   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.955466   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.955649   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.955793   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.057077   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:30.081541   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 10:38:30.109962   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:30.140809   47063 provision.go:86] duration metric: configureAuth took 294.836045ms
	I0115 10:38:30.140840   47063 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:30.141071   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:30.141167   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.144633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.144975   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.145015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.145177   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.145378   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145539   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145703   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.145927   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.146287   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.146310   47063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:30.484993   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:30.485022   47063 machine.go:91] provisioned docker machine in 929.358403ms
	I0115 10:38:30.485035   47063 start.go:300] post-start starting for "default-k8s-diff-port-709012" (driver="kvm2")
	I0115 10:38:30.485049   47063 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:30.485067   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.485390   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:30.485431   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.488115   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488473   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.488512   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.488837   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.489018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.489171   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.590174   47063 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:30.594879   47063 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:30.594907   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:30.594974   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:30.595069   47063 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:30.595183   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:30.604525   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:30.631240   47063 start.go:303] post-start completed in 146.190685ms
	I0115 10:38:30.631270   47063 fix.go:56] fixHost completed within 20.431996373s
	I0115 10:38:30.631293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.634188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634544   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.634577   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634807   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.635014   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635185   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.635574   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.636012   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.636032   47063 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:30.777043   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315110.724251584
	
	I0115 10:38:30.777069   47063 fix.go:206] guest clock: 1705315110.724251584
	I0115 10:38:30.777079   47063 fix.go:219] Guest: 2024-01-15 10:38:30.724251584 +0000 UTC Remote: 2024-01-15 10:38:30.631274763 +0000 UTC m=+210.817197544 (delta=92.976821ms)
	I0115 10:38:30.777107   47063 fix.go:190] guest clock delta is within tolerance: 92.976821ms
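The %!s(MISSING) and %!N(MISSING) markers above appear to be artifacts of the logger treating the literal % in the command template as Go format verbs; the command actually executed on the guest is the usual seconds.nanoseconds clock read, whose output (1705315110.724251584) is compared against the host clock and accepted because the 92.976821ms delta is within tolerance:

    date +%s.%N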
	I0115 10:38:30.777114   47063 start.go:83] releasing machines lock for "default-k8s-diff-port-709012", held for 20.577876265s
	I0115 10:38:30.777143   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.777406   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:30.780611   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781041   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.781076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781250   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.781876   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782186   47063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:30.782240   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.782295   47063 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:30.782321   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.785597   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786228   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.786255   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786386   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786698   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.786881   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.787023   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.787078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.787095   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.787204   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.787774   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.787930   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.788121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.788345   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.919659   47063 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:30.926237   47063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:31.076313   47063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:31.085010   47063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:31.085087   47063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:31.104237   47063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:31.104265   47063 start.go:475] detecting cgroup driver to use...
	I0115 10:38:31.104331   47063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:31.124044   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:31.139494   47063 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:31.139581   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:31.154894   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:31.172458   47063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:31.307400   47063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:31.496675   47063 docker.go:233] disabling docker service ...
	I0115 10:38:31.496733   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:31.513632   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:31.526228   47063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:31.681556   47063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:31.816489   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:31.831193   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:31.853530   47063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:31.853602   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.864559   47063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:31.864661   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.875384   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.888460   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
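Taken together, the sed edits above pin the pause image, switch the cgroup manager, and re-add the conmon cgroup in the CRI-O drop-in before the restart further below. A quick way to confirm the result on the node (a sketch, assuming the drop-in path shown in the commands):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf

which should report pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs" and conmon_cgroup = "pod".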
	I0115 10:38:31.904536   47063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:31.915622   47063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:31.929209   47063 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:31.929266   47063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:31.948691   47063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:31.959872   47063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:32.102988   47063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:32.300557   47063 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:32.300632   47063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:32.305636   47063 start.go:543] Will wait 60s for crictl version
	I0115 10:38:32.305691   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:38:32.309883   47063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:32.354459   47063 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:32.354594   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.402443   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.463150   47063 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:30.802324   46388 main.go:141] libmachine: (no-preload-824502) Calling .Start
	I0115 10:38:30.802525   46388 main.go:141] libmachine: (no-preload-824502) Ensuring networks are active...
	I0115 10:38:30.803127   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network default is active
	I0115 10:38:30.803476   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network mk-no-preload-824502 is active
	I0115 10:38:30.803799   46388 main.go:141] libmachine: (no-preload-824502) Getting domain xml...
	I0115 10:38:30.804452   46388 main.go:141] libmachine: (no-preload-824502) Creating domain...
	I0115 10:38:32.173614   46388 main.go:141] libmachine: (no-preload-824502) Waiting to get IP...
	I0115 10:38:32.174650   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.175113   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.175211   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.175106   47808 retry.go:31] will retry after 275.127374ms: waiting for machine to come up
	I0115 10:38:32.451595   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.452150   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.452183   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.452095   47808 retry.go:31] will retry after 258.80121ms: waiting for machine to come up
	I0115 10:38:32.712701   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.713348   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.713531   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.713459   47808 retry.go:31] will retry after 440.227123ms: waiting for machine to come up
	I0115 10:38:33.155845   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.156595   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.156625   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.156500   47808 retry.go:31] will retry after 428.795384ms: waiting for machine to come up
	I0115 10:38:33.587781   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.588169   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.588190   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.588118   47808 retry.go:31] will retry after 720.536787ms: waiting for machine to come up
	I0115 10:38:34.310098   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:34.310640   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:34.310674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:34.310604   47808 retry.go:31] will retry after 841.490959ms: waiting for machine to come up
	I0115 10:38:30.274782   46387 retry.go:31] will retry after 7.853808987s: kubelet not initialised
	I0115 10:38:32.464592   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:32.467583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.467962   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:32.467993   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.468218   47063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:32.472463   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:32.488399   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:32.488488   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:32.535645   47063 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:32.535776   47063 ssh_runner.go:195] Run: which lz4
	I0115 10:38:32.541468   47063 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:38:32.547264   47063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:32.547297   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:34.427435   47063 crio.go:444] Took 1.886019 seconds to copy over tarball
	I0115 10:38:34.427510   47063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:31.879639   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.379656   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.408694   46584 api_server.go:72] duration metric: took 3.029823539s to wait for apiserver process to appear ...
	I0115 10:38:32.408737   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:32.408760   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.614020   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:36.614053   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:36.614068   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.687561   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.687606   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
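The 500 responses above are the apiserver's verbose healthz breakdown while etcd and several post-start hooks are still initialising, and the earlier 403 is just the anonymous probe being rejected. Once a kubeconfig for this cluster is available, the same breakdown can be requested directly (a sketch; the context depends on which profile owns 192.168.72.222):

    kubectl get --raw='/healthz?verbose'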
	I0115 10:38:38.134400   46387 retry.go:31] will retry after 7.988567077s: kubelet not initialised
	I0115 10:38:35.154196   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:35.154644   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:35.154674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:35.154615   47808 retry.go:31] will retry after 1.099346274s: waiting for machine to come up
	I0115 10:38:36.255575   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:36.256111   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:36.256151   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:36.256038   47808 retry.go:31] will retry after 1.294045748s: waiting for machine to come up
	I0115 10:38:37.551734   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:37.552569   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:37.552593   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:37.552527   47808 retry.go:31] will retry after 1.720800907s: waiting for machine to come up
	I0115 10:38:39.275250   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:39.275651   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:39.275684   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:39.275595   47808 retry.go:31] will retry after 1.914509744s: waiting for machine to come up
	I0115 10:38:37.765711   47063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.338169875s)
	I0115 10:38:37.765741   47063 crio.go:451] Took 3.338279 seconds to extract the tarball
	I0115 10:38:37.765753   47063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:37.807016   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:37.858151   47063 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:37.858195   47063 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:37.858295   47063 ssh_runner.go:195] Run: crio config
	I0115 10:38:37.933830   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:37.933851   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:37.933872   47063 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:37.933896   47063 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-709012 NodeName:default-k8s-diff-port-709012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:37.934040   47063 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-709012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
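The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2115-byte scp a few lines below); it can be inspected inside the VM with, for example:

    minikube -p default-k8s-diff-port-709012 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new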
	
	I0115 10:38:37.934132   47063 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-709012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
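The kubelet unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 388-byte scp below); the effective unit on the node can be confirmed with, for instance:

    sudo systemctl cat kubelet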
	I0115 10:38:37.934202   47063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:37.945646   47063 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:37.945728   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:37.957049   47063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0115 10:38:37.978770   47063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:37.995277   47063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0115 10:38:38.012964   47063 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:38.016803   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:38.028708   47063 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012 for IP: 192.168.39.125
	I0115 10:38:38.028740   47063 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:38.028887   47063 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:38.028926   47063 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:38.028988   47063 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.key
	I0115 10:38:38.048801   47063 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key.657bd91f
	I0115 10:38:38.048895   47063 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key
	I0115 10:38:38.049019   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:38.049058   47063 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:38.049075   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:38.049110   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:38.049149   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:38.049183   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:38.049241   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:38.049848   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:38.078730   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:38.102069   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:38.124278   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:38.150354   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:38.173703   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:38.201758   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:38.227016   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:38.249876   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:38.271859   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:38.294051   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:38.316673   47063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:38.335128   47063 ssh_runner.go:195] Run: openssl version
	I0115 10:38:38.342574   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:38.355889   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361805   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361871   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.369192   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:38.381493   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:38.391714   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396728   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396787   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.402624   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:38.413957   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:38.425258   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430627   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430697   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.440362   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
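
The block above installs the minikube CA certificates under /usr/share/ca-certificates and then symlinks each one into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients look up trusted CAs. For reference only (this is not minikube's certs.go), a minimal Go sketch of computing that hash by shelling out to openssl, using the minikubeCA.pem path from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash for a PEM certificate, the value
// used to name symlinks such as /etc/ssl/certs/<hash>.0 in the log above.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem") // path taken from the log above
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// The runner then effectively does: ln -fs minikubeCA.pem /etc/ssl/certs/<hash>.0
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
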
	I0115 10:38:38.453323   47063 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:38.458803   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:38.465301   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:38.471897   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:38.478274   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:38.484890   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:38.490909   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
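
The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours before a cluster restart is attempted. A minimal sketch of the same check done natively in Go (illustrative only, not minikube's implementation); the certificate path is one of those from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring what `openssl x509 -checkend <seconds>` tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
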
	I0115 10:38:38.496868   47063 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:38.496966   47063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:38.497015   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:38.539389   47063 cri.go:89] found id: ""
	I0115 10:38:38.539475   47063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:38.550998   47063 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:38.551020   47063 kubeadm.go:636] restartCluster start
	I0115 10:38:38.551076   47063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:38.561885   47063 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:38.563439   47063 kubeconfig.go:92] found "default-k8s-diff-port-709012" server: "https://192.168.39.125:8444"
	I0115 10:38:38.566482   47063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:38.576458   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:38.576521   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:38.588702   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.077323   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.077407   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.089885   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.577363   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.577441   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.591111   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:36.909069   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.917556   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.917594   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.409134   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.417305   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.417348   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.909251   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.916788   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.916824   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.409535   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:38.416538   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:38.416572   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.908929   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.863238   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.863279   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.863294   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.869897   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.869922   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.909113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.065422   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:40.065467   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:40.408921   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.414320   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:38:40.424348   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:40.424378   46584 api_server.go:131] duration metric: took 8.015632919s to wait for apiserver health ...
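
The loop traced above polls https://192.168.72.222:8443/healthz roughly twice a second; the apiserver keeps answering 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and the wait ends once it returns 200 "ok". A minimal sketch of such a wait loop (illustrative, not minikube's api_server.go; TLS verification is skipped here because the apiserver certificate is signed by minikube's own CA rather than a system-trusted one):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses,
// mirroring the "Checking apiserver healthz" loop traced in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is not in the host trust store; this sketch
		// simply skips verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.222:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
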
	I0115 10:38:40.424390   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:40.424398   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:40.426615   46584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:40.427979   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:40.450675   46584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
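
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration; its exact contents are not shown in this log. For orientation only, a hypothetical Go snippet embedding a typical bridge conflist (the subnet and plugin options are illustrative, not necessarily what minikube writes):

package main

import (
	"fmt"
	"os"
)

// A typical bridge CNI conflist; values here are illustrative only and are not
// the exact file minikube writes to /etc/cni/net.d/1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Written to /tmp here so the sketch runs without root.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
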
	I0115 10:38:40.478174   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:40.492540   46584 system_pods.go:59] 9 kube-system pods found
	I0115 10:38:40.492582   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492593   46584 system_pods.go:61] "coredns-5dd5756b68-w4p2z" [87d362df-5c29-4a04-b44f-c502cf6849bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492609   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:40.492619   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:40.492633   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:40.492646   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:40.492658   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:40.492671   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:40.492687   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:40.492700   46584 system_pods.go:74] duration metric: took 14.502202ms to wait for pod list to return data ...
	I0115 10:38:40.492715   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:40.496471   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:40.496504   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:40.496517   46584 node_conditions.go:105] duration metric: took 3.794528ms to run NodePressure ...
	I0115 10:38:40.496538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:40.770732   46584 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777051   46584 kubeadm.go:787] kubelet initialised
	I0115 10:38:40.777118   46584 kubeadm.go:788] duration metric: took 6.307286ms waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777139   46584 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:40.784605   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.798293   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798365   46584 pod_ready.go:81] duration metric: took 13.654765ms waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.798389   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798402   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.807236   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807276   46584 pod_ready.go:81] duration metric: took 8.862426ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.807289   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807297   46584 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.813904   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813932   46584 pod_ready.go:81] duration metric: took 6.62492ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.813944   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813951   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.882407   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882458   46584 pod_ready.go:81] duration metric: took 68.496269ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.882472   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882485   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.282123   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282158   46584 pod_ready.go:81] duration metric: took 399.656962ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.282172   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282181   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.683979   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684007   46584 pod_ready.go:81] duration metric: took 401.816493ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.684017   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684023   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.082465   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082490   46584 pod_ready.go:81] duration metric: took 398.460424ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.082501   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082509   46584 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.484454   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484490   46584 pod_ready.go:81] duration metric: took 401.970108ms waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.484504   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484513   46584 pod_ready.go:38] duration metric: took 1.707353329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
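
Each pod_ready.go wait above checks the pod's Ready condition for up to 4 minutes, but records the "skipping!" error instead when the hosting node itself is not yet Ready. A minimal client-go sketch of the underlying Ready-condition check (illustrative, not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True,
// the condition the pod_ready.go waits in the log above are polling for.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-4821/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-781270")
	fmt.Println(ready, err)
}
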
	I0115 10:38:42.484534   46584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:42.499693   46584 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:42.499715   46584 kubeadm.go:640] restartCluster took 24.327423485s
	I0115 10:38:42.499733   46584 kubeadm.go:406] StartCluster complete in 24.381392225s
	I0115 10:38:42.499752   46584 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.499817   46584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:42.502965   46584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.503219   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:42.503253   46584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:42.503356   46584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-781270"
	I0115 10:38:42.503374   46584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-781270"
	I0115 10:38:42.503383   46584 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-781270"
	I0115 10:38:42.503395   46584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-781270"
	W0115 10:38:42.503402   46584 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:42.503451   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:42.503493   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503504   46584 addons.go:69] Setting metrics-server=true in profile "embed-certs-781270"
	I0115 10:38:42.503520   46584 addons.go:234] Setting addon metrics-server=true in "embed-certs-781270"
	W0115 10:38:42.503533   46584 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:42.503577   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503826   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503850   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503855   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503871   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503895   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503924   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.522809   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0115 10:38:42.523025   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0115 10:38:42.523163   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0115 10:38:42.523260   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523382   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523755   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523861   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.523990   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524323   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524345   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524415   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524585   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524605   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524825   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524992   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525017   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525375   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525412   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525571   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.525747   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.528762   46584 addons.go:234] Setting addon default-storageclass=true in "embed-certs-781270"
	W0115 10:38:42.528781   46584 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:42.528807   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.529117   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.529140   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.544693   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0115 10:38:42.545013   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0115 10:38:42.545528   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.545628   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.546235   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546265   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546268   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546280   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546650   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546687   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546820   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.546918   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0115 10:38:42.547068   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.547459   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.548255   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.548269   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.548859   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.549002   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.549393   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.549415   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.549597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.551555   46584 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:42.552918   46584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:42.554551   46584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.554573   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:42.554591   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.554552   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:42.554648   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:42.554662   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.561284   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.561706   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.561854   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.562023   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.562123   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.562179   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.562229   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.564058   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564432   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.564529   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564798   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.564977   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.565148   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.565282   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.570688   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0115 10:38:42.571242   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.571724   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.571749   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.571989   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.572135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.573685   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.573936   46584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.573952   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:42.573969   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.576948   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577272   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.577312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577680   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.577866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.577988   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.578134   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.687267   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:42.687293   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:42.707058   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:42.707083   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:42.727026   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.745278   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.777425   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:42.777450   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:42.779528   46584 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:42.832109   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:43.011971   46584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-781270" context rescaled to 1 replicas
	I0115 10:38:43.012022   46584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:43.014704   46584 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:43.016005   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:44.039814   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.294486297s)
	I0115 10:38:44.039891   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312831152s)
	I0115 10:38:44.039895   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039928   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039946   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040024   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040264   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040283   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040293   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040302   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040412   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040427   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040451   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040613   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040744   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040750   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040755   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040791   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040800   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054113   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.054134   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.054409   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.054454   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054469   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.151470   46584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.135429651s)
	I0115 10:38:44.151517   46584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:44.151560   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319411531s)
	I0115 10:38:44.151601   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.151626   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.151954   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.151973   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152001   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.152012   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.152312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.152317   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.152328   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152338   46584 addons.go:470] Verifying addon metrics-server=true in "embed-certs-781270"
	I0115 10:38:44.155687   46584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
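Editor's note: the 46584 lines above show the addon manifests being scp'd into /etc/kubernetes/addons and applied with the bundled kubectl under KUBECONFIG=/var/lib/minikube/kubeconfig. The following is a minimal illustrative sketch of that apply step, not minikube's own code; it assumes a kubectl binary on PATH and that the listed manifest paths exist.

// addons_apply_sketch.go - illustrative only, not part of the test output.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifests taken from the logged apply command above.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// Point kubectl at the cluster's kubeconfig, as the logged command does via env.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}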
	I0115 10:38:41.191855   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:41.192271   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:41.192310   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:41.192239   47808 retry.go:31] will retry after 2.364591434s: waiting for machine to come up
	I0115 10:38:43.560150   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:43.560624   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:43.560648   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:43.560581   47808 retry.go:31] will retry after 3.204170036s: waiting for machine to come up
	I0115 10:38:40.076788   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.076875   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.089217   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:40.577351   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.577448   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.593294   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.076625   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.076730   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.092700   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.577413   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.577513   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.592266   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.076755   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.076862   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.090411   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.576920   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.576982   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.589590   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.077312   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.077410   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.089732   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.576781   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.576857   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.592414   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.076854   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.076922   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.089009   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.576614   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.576713   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.592137   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.157450   46584 addons.go:505] enable addons completed in 1.654202196s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:38:46.156830   46584 node_ready.go:58] node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:46.129496   46387 retry.go:31] will retry after 7.881779007s: kubelet not initialised
	I0115 10:38:46.766674   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:46.767050   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:46.767072   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:46.767013   47808 retry.go:31] will retry after 3.09324278s: waiting for machine to come up
	I0115 10:38:45.076819   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.076882   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.092624   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:45.576654   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.576724   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.590306   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.076821   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.076920   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.090883   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.577506   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.577590   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.590379   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.076909   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.076997   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.088742   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.577287   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.577371   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.589014   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.076538   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.076608   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.088956   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.576474   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.576573   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.588122   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.588146   47063 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:48.588153   47063 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:48.588162   47063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:48.588214   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:48.631367   47063 cri.go:89] found id: ""
	I0115 10:38:48.631441   47063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:48.648653   47063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:48.657948   47063 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:48.658017   47063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668103   47063 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668124   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:48.787890   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.559039   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.767002   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.842165   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
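Editor's note: the 47063 lines above repeatedly run "sudo pgrep -xnf kube-apiserver.*minikube.*" on a roughly 500ms cadence until the wait deadline expires, then conclude "needs reconfigure" and re-run the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd). A minimal sketch of that poll-until-deadline pattern follows; the deadline value and function names here are assumptions for illustration, not minikube's implementation.

// apiserver_poll_sketch.go - illustrative only, not part of the test output.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID returns the newest matching kube-apiserver PID, or an error if none is running.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed timeout for the sketch
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver running, pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	// Mirrors the log's conclusion: deadline exceeded, so fall back to reconfiguring the cluster.
	fmt.Println("needs reconfigure: apiserver error: context deadline exceeded")
}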
	I0115 10:38:47.155176   46584 node_ready.go:49] node "embed-certs-781270" has status "Ready":"True"
	I0115 10:38:47.155200   46584 node_ready.go:38] duration metric: took 3.003671558s waiting for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:47.155212   46584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:47.162248   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:49.169885   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:51.190513   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:49.864075   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864515   46388 main.go:141] libmachine: (no-preload-824502) Found IP for machine: 192.168.50.136
	I0115 10:38:49.864538   46388 main.go:141] libmachine: (no-preload-824502) Reserving static IP address...
	I0115 10:38:49.864554   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has current primary IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864990   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.865034   46388 main.go:141] libmachine: (no-preload-824502) DBG | skip adding static IP to network mk-no-preload-824502 - found existing host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"}
	I0115 10:38:49.865052   46388 main.go:141] libmachine: (no-preload-824502) Reserved static IP address: 192.168.50.136
	I0115 10:38:49.865073   46388 main.go:141] libmachine: (no-preload-824502) Waiting for SSH to be available...
	I0115 10:38:49.865115   46388 main.go:141] libmachine: (no-preload-824502) DBG | Getting to WaitForSSH function...
	I0115 10:38:49.867410   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867671   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.867708   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867864   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH client type: external
	I0115 10:38:49.867924   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa (-rw-------)
	I0115 10:38:49.867961   46388 main.go:141] libmachine: (no-preload-824502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:49.867983   46388 main.go:141] libmachine: (no-preload-824502) DBG | About to run SSH command:
	I0115 10:38:49.867994   46388 main.go:141] libmachine: (no-preload-824502) DBG | exit 0
	I0115 10:38:49.966638   46388 main.go:141] libmachine: (no-preload-824502) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:49.967072   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetConfigRaw
	I0115 10:38:49.967925   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:49.970409   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.970811   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.970846   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.971099   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:38:49.971300   46388 machine.go:88] provisioning docker machine ...
	I0115 10:38:49.971327   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:49.971561   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971757   46388 buildroot.go:166] provisioning hostname "no-preload-824502"
	I0115 10:38:49.971783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971970   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:49.974279   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974723   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.974758   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974917   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:49.975088   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975247   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975460   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:49.975640   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:49.976081   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:49.976099   46388 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-824502 && echo "no-preload-824502" | sudo tee /etc/hostname
	I0115 10:38:50.121181   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-824502
	
	I0115 10:38:50.121206   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.123821   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124162   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.124194   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124371   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.124588   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124788   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124940   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.125103   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.125410   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.125429   46388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-824502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-824502/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-824502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:50.259649   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:50.259680   46388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:50.259710   46388 buildroot.go:174] setting up certificates
	I0115 10:38:50.259724   46388 provision.go:83] configureAuth start
	I0115 10:38:50.259736   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:50.260022   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:50.262296   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262683   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.262704   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262848   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.265340   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265715   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.265743   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265885   46388 provision.go:138] copyHostCerts
	I0115 10:38:50.265942   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:50.265953   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:50.266025   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:50.266128   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:50.266143   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:50.266178   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:50.266258   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:50.266268   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:50.266296   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:50.266359   46388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.no-preload-824502 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube no-preload-824502]
	I0115 10:38:50.666513   46388 provision.go:172] copyRemoteCerts
	I0115 10:38:50.666584   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:50.666615   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.669658   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670109   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.670162   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670410   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.670632   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.670812   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.671067   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:50.774944   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:50.799533   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:50.824210   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:38:50.849191   46388 provision.go:86] duration metric: configureAuth took 589.452836ms
	I0115 10:38:50.849224   46388 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:50.849455   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:38:50.849560   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.852884   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853291   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.853346   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853508   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.853746   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.853936   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.854105   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.854244   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.854708   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.854735   46388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:51.246971   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:51.246997   46388 machine.go:91] provisioned docker machine in 1.275679147s
	I0115 10:38:51.247026   46388 start.go:300] post-start starting for "no-preload-824502" (driver="kvm2")
	I0115 10:38:51.247043   46388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:51.247063   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.247450   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:51.247481   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.250477   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250751   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.250783   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250951   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.251085   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.251227   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.251308   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.350552   46388 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:51.355893   46388 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:51.355918   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:51.355994   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:51.356096   46388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:51.356220   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:51.366598   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:51.393765   46388 start.go:303] post-start completed in 146.702407ms
	I0115 10:38:51.393798   46388 fix.go:56] fixHost completed within 20.616533939s
	I0115 10:38:51.393826   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.396990   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397531   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.397563   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397785   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.398006   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398190   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398367   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.398602   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:51.399038   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:51.399057   46388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:51.532940   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315131.477577825
	
	I0115 10:38:51.532962   46388 fix.go:206] guest clock: 1705315131.477577825
	I0115 10:38:51.532971   46388 fix.go:219] Guest: 2024-01-15 10:38:51.477577825 +0000 UTC Remote: 2024-01-15 10:38:51.393803771 +0000 UTC m=+361.851018624 (delta=83.774054ms)
	I0115 10:38:51.533006   46388 fix.go:190] guest clock delta is within tolerance: 83.774054ms
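Editor's note: the fix.go lines above compare the guest clock against the host-side timestamp and accept the ~83ms difference as "within tolerance". A minimal sketch of that comparison, assuming a one-second tolerance purely for illustration (the actual threshold is not stated in this log):

// clock_delta_sketch.go - illustrative only, not part of the test output.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/remote clock delta and whether it is acceptable.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	remote := time.Now()
	guest := remote.Add(83 * time.Millisecond) // delta of the same order as the log above
	d, ok := withinTolerance(guest, remote, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}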
	I0115 10:38:51.533011   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 20.755785276s
	I0115 10:38:51.533031   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.533296   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:51.536532   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537167   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.537206   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537411   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538058   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538236   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538395   46388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:51.538461   46388 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:51.538485   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.538492   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.541387   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541419   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541791   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541836   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541878   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541952   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.541959   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.542137   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542219   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.542317   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542396   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542477   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.542535   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542697   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.668594   46388 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:51.675328   46388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:51.822660   46388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:51.830242   46388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:51.830318   46388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:51.846032   46388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:51.846067   46388 start.go:475] detecting cgroup driver to use...
	I0115 10:38:51.846147   46388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:51.863608   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:51.875742   46388 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:51.875810   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:51.888307   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:51.902327   46388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:52.027186   46388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:52.170290   46388 docker.go:233] disabling docker service ...
	I0115 10:38:52.170372   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:52.184106   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:52.195719   46388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:52.304630   46388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:52.420312   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:52.434213   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:52.453883   46388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:52.453946   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.464662   46388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:52.464726   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.474291   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.483951   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.493132   46388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:52.503668   46388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:52.512336   46388 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:52.512410   46388 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:52.529602   46388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:52.541735   46388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:52.664696   46388 ssh_runner.go:195] Run: sudo systemctl restart crio
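Editor's note: the crio.go lines above edit the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf via sed (pause image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs"), then reload systemd and restart crio. The sketch below reproduces those shell steps from Go, assuming direct local shell access with sudo rather than minikube's ssh_runner; it is illustrative only.

// crio_config_sketch.go - illustrative only, not part of the test output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same edits and restart as the logged commands above.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("command failed: %s\n%v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}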
	I0115 10:38:52.844980   46388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:52.845051   46388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:52.850380   46388 start.go:543] Will wait 60s for crictl version
	I0115 10:38:52.850463   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:52.854500   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:52.890488   46388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:52.890595   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:52.944999   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:53.005494   46388 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:38:54.017897   46387 retry.go:31] will retry after 11.956919729s: kubelet not initialised
	I0115 10:38:53.006783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:53.009509   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.009903   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:53.009934   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.010135   46388 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:53.014612   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:53.029014   46388 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:38:53.029063   46388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:53.073803   46388 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:38:53.073839   46388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:38:53.073906   46388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.073943   46388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.073979   46388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.073945   46388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.073914   46388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.073932   46388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.073931   46388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.073918   46388 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075357   46388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.075478   46388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.075515   46388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.075532   46388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.075482   46388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.075483   46388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.234170   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.248000   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0115 10:38:53.264387   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.289576   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.303961   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.326078   46388 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0115 10:38:53.326132   46388 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.326176   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.331268   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.334628   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.366099   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.426012   46388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0115 10:38:53.426058   46388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.426106   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.426316   46388 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0115 10:38:53.426346   46388 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.426377   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505102   46388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0115 10:38:53.505194   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.505201   46388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.505286   46388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0115 10:38:53.505358   46388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.505410   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505334   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.507596   46388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0115 10:38:53.507630   46388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.507674   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.544052   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.544142   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.544078   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.544263   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.544458   46388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0115 10:38:53.544505   46388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.544550   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.568682   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0115 10:38:53.568786   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.568808   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.681576   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681671   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0115 10:38:53.681777   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:53.681779   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681918   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.681990   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.682040   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0115 10:38:53.682108   46388 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681996   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.682157   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681927   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 10:38:53.682277   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:53.728102   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:53.728204   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
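
Editorial sketch (not minikube's cache_images.go/crio.go code): the lines above show the fallback path when no preload tarball matches the requested Kubernetes version — each cached image tarball staged under /var/lib/minikube/images is pushed into the CRI-O/podman store with "sudo podman load -i <tarball>". The minimal Go program below illustrates that step, run locally rather than over SSH for simplicity; the tarball paths are taken from the log, everything else (program name, timeout-free loop, error handling) is an assumption.

    // loadimages_sketch.go - hypothetical illustration of the "podman load" fallback
    // recorded in the log above; not the actual minikube implementation.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Tarballs staged under /var/lib/minikube/images in the log above.
    	tarballs := []string{
    		"/var/lib/minikube/images/coredns_v1.11.1",
    		"/var/lib/minikube/images/storage-provisioner_v5",
    		"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
    		"/var/lib/minikube/images/etcd_3.5.10-0",
    	}
    	for _, t := range tarballs {
    		// Equivalent of the "sudo podman load -i <tarball>" runs in the log.
    		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
    		if err != nil {
    			log.Fatalf("podman load %s failed: %v\n%s", t, err, out)
    		}
    		fmt.Printf("loaded %s\n", t)
    	}
    }
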
	I0115 10:38:49.944443   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:49.944529   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.445085   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.945395   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.444784   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.944622   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.444886   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.460961   47063 api_server.go:72] duration metric: took 2.516519088s to wait for apiserver process to appear ...
	I0115 10:38:52.460980   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:52.460996   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:52.461498   47063 api_server.go:269] stopped: https://192.168.39.125:8444/healthz: Get "https://192.168.39.125:8444/healthz": dial tcp 192.168.39.125:8444: connect: connection refused
	I0115 10:38:52.961968   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:53.672555   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:55.685156   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:56.172493   46584 pod_ready.go:92] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.172521   46584 pod_ready.go:81] duration metric: took 9.010249071s waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.172534   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.178057   46584 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178080   46584 pod_ready.go:81] duration metric: took 5.538491ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:56.178092   46584 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178100   46584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185048   46584 pod_ready.go:92] pod "etcd-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.185071   46584 pod_ready.go:81] duration metric: took 6.962528ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185082   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190244   46584 pod_ready.go:92] pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.190263   46584 pod_ready.go:81] duration metric: took 5.173778ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190275   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196537   46584 pod_ready.go:92] pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.196555   46584 pod_ready.go:81] duration metric: took 6.272551ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196566   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367735   46584 pod_ready.go:92] pod "kube-proxy-jqgfc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.367766   46584 pod_ready.go:81] duration metric: took 171.191874ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367779   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
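
Editorial sketch (not minikube's pod_ready.go): the "pod_ready" lines above repeatedly fetch a kube-system pod and check its Ready condition until it reports True. The client-go sketch below shows the same poll loop under stated assumptions: the kubeconfig path is the one written elsewhere in this log, the current context is assumed to point at the embed-certs cluster, and the pod name/namespace are taken from the log; the 6-minute deadline mirrors the "waiting up to 6m0s" messages.

    // podready_sketch.go - hypothetical poll for a pod's Ready condition using client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as written in this log; assumed to point at the target cluster.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-4821/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		log.Fatal(err)
    	}

    	name, namespace := "etcd-embed-certs-781270", "kube-system"
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				// Same check the log phrases as status "Ready":"True".
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					fmt.Printf("pod %q is Ready\n", name)
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatalf("timed out waiting for pod %q to be Ready", name)
    }
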
	I0115 10:38:56.209201   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.209232   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.209247   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.283870   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.283914   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.461166   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.476935   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.476968   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:56.961147   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.966917   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.966949   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:57.461505   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:57.467290   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:38:57.482673   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:57.482709   47063 api_server.go:131] duration metric: took 5.021721974s to wait for apiserver health ...
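
Editorial sketch (not minikube's api_server.go): the healthz sequence above is an anonymous HTTPS GET against https://192.168.39.125:8444/healthz that tolerates connection refusals, 403s (anonymous user) and 500s (post-start hooks such as rbac/bootstrap-roles still failing) until the endpoint returns 200 "ok". The standard-library Go loop below reproduces that behaviour; the URL comes from the log, while the overall timeout and retry interval are assumptions.

    // healthz_probe_sketch.go - hypothetical apiserver healthz polling loop.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.125:8444/healthz"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Anonymous probe against a self-signed apiserver cert, so skip verification
    		// in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			// "connect: connection refused" while the apiserver container restarts.
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Printf("healthz: %s\n", body) // "ok"
    			return
    		}
    		// 403 or 500 with "healthz check failed": keep retrying.
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
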
	I0115 10:38:57.482721   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:57.482729   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:57.484809   47063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:57.486522   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:57.503036   47063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:57.523094   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:57.539289   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:57.539332   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:57.539342   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:57.539353   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:57.539361   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:57.539367   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:57.539372   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:57.539378   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:57.539392   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:57.539400   47063 system_pods.go:74] duration metric: took 16.288236ms to wait for pod list to return data ...
	I0115 10:38:57.539415   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:57.547016   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:57.547043   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:57.547053   47063 node_conditions.go:105] duration metric: took 7.632954ms to run NodePressure ...
	I0115 10:38:57.547069   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:57.838097   47063 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847919   47063 kubeadm.go:787] kubelet initialised
	I0115 10:38:57.847945   47063 kubeadm.go:788] duration metric: took 9.818012ms waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847960   47063 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:57.860753   47063 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.866623   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866666   47063 pod_ready.go:81] duration metric: took 5.881593ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.866679   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866687   47063 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.873742   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873772   47063 pod_ready.go:81] duration metric: took 7.070689ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.873787   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873795   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.881283   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881313   47063 pod_ready.go:81] duration metric: took 7.502343ms waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.881328   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881335   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.927473   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927504   47063 pod_ready.go:81] duration metric: took 46.159848ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.927516   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927523   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.329002   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329029   47063 pod_ready.go:81] duration metric: took 401.499694ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.329039   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329046   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.727362   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727394   47063 pod_ready.go:81] duration metric: took 398.336577ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.727411   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727420   47063 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:59.138162   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138195   47063 pod_ready.go:81] duration metric: took 410.766568ms waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:59.138207   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138214   47063 pod_ready.go:38] duration metric: took 1.290244752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:59.138232   47063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:59.173438   47063 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:59.173463   47063 kubeadm.go:640] restartCluster took 20.622435902s
	I0115 10:38:59.173473   47063 kubeadm.go:406] StartCluster complete in 20.676611158s
	I0115 10:38:59.173494   47063 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.173598   47063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:59.176160   47063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.176389   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:59.176558   47063 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:59.176645   47063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176652   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:59.176680   47063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.176696   47063 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:59.176706   47063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176725   47063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-709012"
	I0115 10:38:59.176768   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177130   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177163   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177188   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177220   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177254   47063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.177288   47063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.177305   47063 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:59.177390   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177796   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177849   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.182815   47063 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-709012" context rescaled to 1 replicas
	I0115 10:38:59.182849   47063 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:59.184762   47063 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:59.186249   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:59.196870   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0115 10:38:59.197111   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0115 10:38:59.197493   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.197595   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.198074   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198096   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198236   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198264   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198410   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.198620   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.198634   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.199252   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.199278   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.202438   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0115 10:38:59.202957   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.203462   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.203489   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.203829   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.204271   47063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.204295   47063 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:59.204322   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.204406   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204434   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.204728   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204768   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.220973   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0115 10:38:59.221383   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.221873   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.221898   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.222330   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.222537   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.223337   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0115 10:38:59.223746   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0115 10:38:59.224454   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.224557   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.227071   47063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:59.225090   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.225234   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.228609   47063 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.228624   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:59.228638   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.228668   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229046   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.229064   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229415   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229515   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229671   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.230070   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.230093   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.232470   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.233532   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.235985   47063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:56.206357   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.524032218s)
	I0115 10:38:56.206399   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0115 10:38:56.206444   46388 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: (2.52429359s)
	I0115 10:38:56.206494   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206580   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.524566038s)
	I0115 10:38:56.206594   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206609   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206684   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.52488513s)
	I0115 10:38:56.206806   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0115 10:38:56.206718   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.524535788s)
	I0115 10:38:56.206824   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0115 10:38:56.206756   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.524930105s)
	I0115 10:38:56.206843   46388 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.206863   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206780   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.478563083s)
	I0115 10:38:56.206890   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206908   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.986404   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0115 10:38:56.986480   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0115 10:38:56.986513   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:56.986555   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:59.063376   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.076785591s)
	I0115 10:38:59.063421   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0115 10:38:59.063449   46388 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.063494   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.234530   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.234543   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.237273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.237334   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:59.237349   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:59.237367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.237458   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.237624   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.237776   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.240471   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242356   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.242442   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.242483   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242538   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.245246   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.245394   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.251844   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0115 10:38:59.252344   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.252855   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.252876   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.253245   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.253439   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.255055   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.255299   47063 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.255315   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:59.255331   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.258732   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259370   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.259408   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259554   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.259739   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.259915   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.260060   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.380593   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:59.380623   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:59.387602   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.409765   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.434624   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:59.434655   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:59.514744   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:59.514778   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:59.528401   47063 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:59.528428   47063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:38:59.552331   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:00.775084   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.365286728s)
	I0115 10:39:00.775119   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387483878s)
	I0115 10:39:00.775251   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775268   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775195   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775319   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775697   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775737   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775778   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.775791   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.775805   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775818   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.776009   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.776030   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778922   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.778939   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778949   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.778959   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.779172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.780377   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.780395   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.787873   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.787893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.788142   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.788161   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.882725   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330338587s)
	I0115 10:39:00.882775   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.882792   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883118   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883140   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883144   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.883150   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.883166   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883494   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883513   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883523   47063 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-709012"
	I0115 10:39:00.887782   47063 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
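
Editorial sketch (not minikube's sshutil.go/ssh_runner.go): the addon lines above copy the metrics-server manifests into the guest and then apply them with the guest's bundled kubectl over the same SSH connection. The sketch below, built on golang.org/x/crypto/ssh, runs only the final apply step; the address, user, key path and kubectl invocation are taken from the log, and the rest (program structure, skipping the file copy, ignoring host keys) are assumptions made for brevity.

    // addon_apply_sketch.go - hypothetical remote "kubectl apply" over SSH.
    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.125:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// Mirrors the kubectl apply invocation recorded in the log above.
    	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.28.4/kubectl apply " +
    		"-f /etc/kubernetes/addons/metrics-apiservice.yaml " +
    		"-f /etc/kubernetes/addons/metrics-server-deployment.yaml " +
    		"-f /etc/kubernetes/addons/metrics-server-rbac.yaml " +
    		"-f /etc/kubernetes/addons/metrics-server-service.yaml"
    	out, err := session.CombinedOutput(cmd)
    	if err != nil {
    		log.Fatalf("apply failed: %v\n%s", err, out)
    	}
    	log.Printf("%s", out)
    }
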
	I0115 10:38:56.767524   46584 pod_ready.go:92] pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.767555   46584 pod_ready.go:81] duration metric: took 399.766724ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.767569   46584 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.776515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:00.777313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:03.358192   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.294671295s)
	I0115 10:39:03.358221   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0115 10:39:03.358249   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:03.358296   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:00.889422   47063 addons.go:505] enable addons completed in 1.71286662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:01.533332   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.534081   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.274613   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.277132   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.981700   46387 kubeadm.go:787] kubelet initialised
	I0115 10:39:05.981726   46387 kubeadm.go:788] duration metric: took 49.462651853s waiting for restarted kubelet to initialise ...
	I0115 10:39:05.981737   46387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:05.987142   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993872   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.993896   46387 pod_ready.go:81] duration metric: took 6.725677ms waiting for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993920   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999103   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.999133   46387 pod_ready.go:81] duration metric: took 5.204706ms waiting for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999148   46387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004449   46387 pod_ready.go:92] pod "etcd-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.004472   46387 pod_ready.go:81] duration metric: took 5.315188ms waiting for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004484   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010187   46387 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.010209   46387 pod_ready.go:81] duration metric: took 5.716918ms waiting for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010221   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380715   46387 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.380742   46387 pod_ready.go:81] duration metric: took 370.513306ms waiting for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380756   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780865   46387 pod_ready.go:92] pod "kube-proxy-w9fdn" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.780887   46387 pod_ready.go:81] duration metric: took 400.122851ms waiting for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780899   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179755   46387 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.179785   46387 pod_ready.go:81] duration metric: took 398.879027ms waiting for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179798   46387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.188315   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.429866   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.071542398s)
	I0115 10:39:05.429896   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0115 10:39:05.429927   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:05.429988   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:08.115120   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.685106851s)
	I0115 10:39:08.115147   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0115 10:39:08.115179   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:08.115226   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:05.540836   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:07.032884   47063 node_ready.go:49] node "default-k8s-diff-port-709012" has status "Ready":"True"
	I0115 10:39:07.032914   47063 node_ready.go:38] duration metric: took 7.504464113s waiting for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:39:07.032928   47063 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:07.042672   47063 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048131   47063 pod_ready.go:92] pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.048156   47063 pod_ready.go:81] duration metric: took 5.456337ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048167   47063 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053470   47063 pod_ready.go:92] pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.053492   47063 pod_ready.go:81] duration metric: took 5.316882ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053504   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.061828   47063 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.562201   47063 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.562235   47063 pod_ready.go:81] duration metric: took 2.508719163s waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.562248   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571588   47063 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.571614   47063 pod_ready.go:81] duration metric: took 9.356396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571628   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580269   47063 pod_ready.go:92] pod "kube-proxy-d8lcq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.580291   47063 pod_ready.go:81] duration metric: took 8.654269ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580305   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833621   47063 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.833646   47063 pod_ready.go:81] duration metric: took 253.332081ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833658   47063 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.776707   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.777515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.687740   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.187565   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.092236   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.976986955s)
	I0115 10:39:11.092266   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0115 10:39:11.092290   46388 cache_images.go:123] Successfully loaded all cached images
	I0115 10:39:11.092296   46388 cache_images.go:92] LoadImages completed in 18.018443053s
	I0115 10:39:11.092373   46388 ssh_runner.go:195] Run: crio config
	I0115 10:39:11.155014   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:11.155036   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:11.155056   46388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:39:11.155074   46388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-824502 NodeName:no-preload-824502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:39:11.155203   46388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-824502"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:39:11.155265   46388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-824502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:39:11.155316   46388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:39:11.165512   46388 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:39:11.165586   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:39:11.175288   46388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0115 10:39:11.192730   46388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:39:11.209483   46388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0115 10:39:11.228296   46388 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0115 10:39:11.232471   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:39:11.245041   46388 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502 for IP: 192.168.50.136
	I0115 10:39:11.245106   46388 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:11.245298   46388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:39:11.245364   46388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:39:11.245456   46388 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.key
	I0115 10:39:11.245551   46388 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key.cb5546de
	I0115 10:39:11.245617   46388 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key
	I0115 10:39:11.245769   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:39:11.245808   46388 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:39:11.245823   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:39:11.245855   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:39:11.245895   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:39:11.245937   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:39:11.246018   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:39:11.246987   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:39:11.272058   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:39:11.295425   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:39:11.320271   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:39:11.347161   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:39:11.372529   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:39:11.396765   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:39:11.419507   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:39:11.441814   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:39:11.463306   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:39:11.485830   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:39:11.510306   46388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:39:11.527095   46388 ssh_runner.go:195] Run: openssl version
	I0115 10:39:11.532483   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:39:11.543447   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548266   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548330   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.554228   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:39:11.564891   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:39:11.574809   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579217   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579257   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.584745   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:39:11.596117   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:39:11.606888   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611567   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611632   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.617307   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
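The openssl/ln pairs above implement OpenSSL's hashed-directory CA layout: each trusted certificate is also reachable as /etc/ssl/certs/<subject-hash>.0, which is where the b5213941.0 name comes from. A minimal by-hand sketch of the same step, using only the minikubeCA paths already shown in the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints the subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"             # same symlink the test run creates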
	I0115 10:39:11.627893   46388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:39:11.632530   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:39:11.638562   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:39:11.644605   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:39:11.650917   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:39:11.656970   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:39:11.662948   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:39:11.669010   46388 kubeadm.go:404] StartCluster: {Name:no-preload-824502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:39:11.669093   46388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:39:11.669144   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:11.707521   46388 cri.go:89] found id: ""
	I0115 10:39:11.707594   46388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:39:11.719407   46388 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:39:11.719445   46388 kubeadm.go:636] restartCluster start
	I0115 10:39:11.719511   46388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:39:11.729609   46388 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.730839   46388 kubeconfig.go:92] found "no-preload-824502" server: "https://192.168.50.136:8443"
	I0115 10:39:11.733782   46388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:39:11.744363   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:11.744437   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:11.757697   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.245289   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.245389   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.258680   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.745234   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.745334   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.757934   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.244459   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.244549   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.256860   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.745400   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.745486   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.759185   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:14.244696   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.244774   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.257692   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.842044   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.339850   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.779637   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.278260   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.187668   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.187834   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.745104   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.745191   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.757723   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.244680   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.244760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.259042   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.744599   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.744692   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.761497   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.245412   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.245507   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.260040   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.744664   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.744752   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.757209   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.244739   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.244826   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.257922   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.744411   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.744528   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.756304   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.244475   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.244580   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.257372   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.744977   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.745072   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.758201   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:19.244832   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.244906   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.257468   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.342438   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.845282   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.776399   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.276057   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:20.686392   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:22.687613   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.745014   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.745076   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.757274   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.245246   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.245307   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.257735   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.745333   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.745422   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.757945   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.245022   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.245112   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.257351   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.744980   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.745057   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.756073   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.756099   46388 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:39:21.756107   46388 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:39:21.756116   46388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:39:21.756167   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:21.800172   46388 cri.go:89] found id: ""
	I0115 10:39:21.800229   46388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:39:21.815607   46388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:39:21.826460   46388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:39:21.826525   46388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835735   46388 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835758   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:21.963603   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.673572   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.882139   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.975846   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
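The restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the generated /var/tmp/minikube/kubeadm.yaml instead of running a full init. As a rough way to preview what that config would produce without modifying the node, kubeadm's dry-run mode can be pointed at the same file; this is a sketch using the binary path and config location from the log, not a command the test itself runs:

	sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run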
	I0115 10:39:23.061284   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:39:23.061391   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:23.561760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.061736   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.562127   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:21.340520   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.340897   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:21.776123   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.776196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.777003   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:24.688163   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.187371   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.061818   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.561582   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.584837   46388 api_server.go:72] duration metric: took 2.523550669s to wait for apiserver process to appear ...
	I0115 10:39:25.584868   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:39:25.584893   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.585385   46388 api_server.go:269] stopped: https://192.168.50.136:8443/healthz: Get "https://192.168.50.136:8443/healthz": dial tcp 192.168.50.136:8443: connect: connection refused
	I0115 10:39:26.085248   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.546970   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.547007   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.547026   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.597433   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.597466   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.597482   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.342652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.343320   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.840652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.625537   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:29.625587   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.085614   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.091715   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.091745   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.585298   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.591889   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.591919   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:31.086028   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:31.091297   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:39:31.099702   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:39:31.099726   46388 api_server.go:131] duration metric: took 5.514851771s to wait for apiserver health ...
	I0115 10:39:31.099735   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:31.099741   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:31.102193   46388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
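The healthz polling above moves from 403 (the probe is unauthenticated, and system:anonymous is rejected until the RBAC bootstrap roles exist) through 500 (individual poststarthook checks still failing) to 200. Once the cluster is up, the same endpoint can be probed by hand; the ?verbose form returns the per-check breakdown seen in the 500 responses. A sketch, assuming the default anonymous access to /healthz that the final 200 implies:

	curl -sk "https://192.168.50.136:8443/healthz?verbose"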
	I0115 10:39:28.275539   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:30.276634   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.104002   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:39:31.130562   46388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:39:31.163222   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:39:31.186170   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:39:31.186201   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:39:31.186212   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:39:31.186222   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:39:31.186231   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:39:31.186242   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:39:31.186252   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:39:31.186263   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:39:31.186276   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:39:31.186284   46388 system_pods.go:74] duration metric: took 23.040188ms to wait for pod list to return data ...
	I0115 10:39:31.186292   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:39:31.215529   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:39:31.215567   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:39:31.215584   46388 node_conditions.go:105] duration metric: took 29.283674ms to run NodePressure ...
	I0115 10:39:31.215615   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:31.584238   46388 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590655   46388 kubeadm.go:787] kubelet initialised
	I0115 10:39:31.590679   46388 kubeadm.go:788] duration metric: took 6.418412ms waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590688   46388 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:31.603892   46388 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.612449   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612484   46388 pod_ready.go:81] duration metric: took 8.567896ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.612497   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612507   46388 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.622651   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622678   46388 pod_ready.go:81] duration metric: took 10.161967ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.622690   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622698   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.633893   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633917   46388 pod_ready.go:81] duration metric: took 11.210807ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.633929   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633937   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.639395   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639423   46388 pod_ready.go:81] duration metric: took 5.474128ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.639434   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639442   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.989202   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989242   46388 pod_ready.go:81] duration metric: took 349.786667ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.989255   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989264   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.387200   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387227   46388 pod_ready.go:81] duration metric: took 397.955919ms waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.387236   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387243   46388 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.789213   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789235   46388 pod_ready.go:81] duration metric: took 401.985079ms waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.789245   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789252   46388 pod_ready.go:38] duration metric: took 1.198551697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:32.789271   46388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:39:32.802883   46388 ops.go:34] apiserver oom_adj: -16
	I0115 10:39:32.802901   46388 kubeadm.go:640] restartCluster took 21.083448836s
	I0115 10:39:32.802908   46388 kubeadm.go:406] StartCluster complete in 21.133905255s
	I0115 10:39:32.802921   46388 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.802997   46388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:39:32.804628   46388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.804880   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:39:32.804990   46388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:39:32.805075   46388 addons.go:69] Setting storage-provisioner=true in profile "no-preload-824502"
	I0115 10:39:32.805094   46388 addons.go:234] Setting addon storage-provisioner=true in "no-preload-824502"
	W0115 10:39:32.805102   46388 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:39:32.805108   46388 addons.go:69] Setting default-storageclass=true in profile "no-preload-824502"
	I0115 10:39:32.805128   46388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-824502"
	I0115 10:39:32.805128   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:39:32.805137   46388 addons.go:69] Setting metrics-server=true in profile "no-preload-824502"
	I0115 10:39:32.805165   46388 addons.go:234] Setting addon metrics-server=true in "no-preload-824502"
	I0115 10:39:32.805172   46388 host.go:66] Checking if "no-preload-824502" exists ...
	W0115 10:39:32.805175   46388 addons.go:243] addon metrics-server should already be in state true
	I0115 10:39:32.805219   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.805564   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805565   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805597   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805602   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805616   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805698   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.809596   46388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-824502" context rescaled to 1 replicas
	I0115 10:39:32.809632   46388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:39:32.812135   46388 out.go:177] * Verifying Kubernetes components...
	I0115 10:39:32.814392   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:39:32.823244   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0115 10:39:32.823758   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.823864   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0115 10:39:32.824287   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824306   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.824351   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.824693   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.824816   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.824833   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824857   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.825184   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.825778   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.825823   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.827847   46388 addons.go:234] Setting addon default-storageclass=true in "no-preload-824502"
	W0115 10:39:32.827864   46388 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:39:32.827883   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.828242   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.828286   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.838537   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0115 10:39:32.839162   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.839727   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.839747   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.841293   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.841862   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.841899   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.844309   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0115 10:39:32.844407   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0115 10:39:32.844654   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.844941   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.845132   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845156   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.845712   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.845881   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845894   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.846316   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.846347   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.846910   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.847189   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.849126   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.851699   46388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:39:32.853268   46388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:32.853284   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:39:32.853305   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.855997   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856372   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.856394   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856569   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.856716   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.856853   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.856975   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.861396   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0115 10:39:32.861893   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.862379   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.862409   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.862874   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.863050   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.864195   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0115 10:39:32.864480   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.866714   46388 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:39:32.864849   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.868242   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:39:32.868256   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:39:32.868274   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.868596   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.868613   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.869057   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.869306   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.870918   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.871163   46388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:32.871177   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:39:32.871192   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.871252   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871670   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.871691   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871958   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.872127   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.872288   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.872463   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.874381   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875287   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.875314   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875478   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.875624   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.875786   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.875893   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.982357   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:33.059016   46388 node_ready.go:35] waiting up to 6m0s for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:33.059259   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:39:33.059281   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:39:33.060796   46388 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:39:33.060983   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:33.110608   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:39:33.110633   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:39:33.154857   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:33.154886   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:39:33.198495   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:34.178167   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117123302s)
	I0115 10:39:34.178220   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178234   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178312   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19592253s)
	I0115 10:39:34.178359   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178372   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178649   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178669   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178687   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178712   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178723   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178735   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178691   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178800   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178811   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178823   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178982   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179001   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.179003   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179040   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179057   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179075   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.186855   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.186875   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.187114   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.187135   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.187154   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.293778   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095231157s)
	I0115 10:39:34.293837   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.293861   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294161   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294184   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294194   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.294203   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294451   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294475   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294487   46388 addons.go:470] Verifying addon metrics-server=true in "no-preload-824502"
	I0115 10:39:34.296653   46388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:39:29.687541   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.689881   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.692248   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.298179   46388 addons.go:505] enable addons completed in 1.493195038s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:31.842086   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.843601   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:32.775651   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.778997   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:36.186700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.688932   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:35.063999   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:37.068802   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:39.564287   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:36.341901   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.344615   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:37.278252   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:39.780035   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:41.186854   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.687410   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:40.063481   46388 node_ready.go:49] node "no-preload-824502" has status "Ready":"True"
	I0115 10:39:40.063509   46388 node_ready.go:38] duration metric: took 7.00445832s waiting for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:40.063521   46388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:40.069733   46388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077511   46388 pod_ready.go:92] pod "coredns-76f75df574-ft2wt" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.077539   46388 pod_ready.go:81] duration metric: took 7.783253ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077549   46388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082665   46388 pod_ready.go:92] pod "etcd-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.082693   46388 pod_ready.go:81] duration metric: took 5.137636ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082704   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087534   46388 pod_ready.go:92] pod "kube-apiserver-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.087552   46388 pod_ready.go:81] duration metric: took 4.840583ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087563   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092447   46388 pod_ready.go:92] pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.092473   46388 pod_ready.go:81] duration metric: took 4.90114ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092493   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464047   46388 pod_ready.go:92] pod "kube-proxy-nlk2h" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.464065   46388 pod_ready.go:81] duration metric: took 371.565815ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464075   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:42.472255   46388 pod_ready.go:102] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.471011   46388 pod_ready.go:92] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:43.471033   46388 pod_ready.go:81] duration metric: took 3.006951578s waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:43.471045   46388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.841668   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.842151   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.277636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:44.787510   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:46.187891   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:48.687578   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.478255   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.978120   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.340455   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.341486   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.840829   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.275430   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.776946   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.188236   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.686748   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.980682   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:52.479488   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.840971   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.841513   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.778023   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.275602   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:55.687892   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.186665   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.978059   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.978213   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.978881   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.341772   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.841021   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.775700   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:59.274671   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:01.280895   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.186976   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:02.688712   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.978942   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.482480   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.841912   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.340823   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.775015   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.776664   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.185744   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.185877   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:09.187192   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.979141   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.479235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.840997   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.842100   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.278110   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.775278   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:11.686672   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.187037   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.978475   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.978621   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.346343   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.841357   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.841981   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:13.278313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:15.777340   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.188343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.687840   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.979177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.981550   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.478364   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:17.340973   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.341317   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.275525   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:20.277493   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.187342   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.693743   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.480386   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.481947   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.341650   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.841949   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:22.777674   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.273859   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:26.186846   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.188206   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.978266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.979824   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.842629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.341954   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.274109   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:29.275517   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:31.277396   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.688520   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.187343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.478712   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:32.978549   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.843559   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.340435   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.278639   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.777051   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.186611   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:34.978720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:37.488790   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.841994   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.340074   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.278319   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.776206   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:39.978911   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.478331   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.187741   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.687320   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.340766   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.341909   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.843116   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.777726   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.777953   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:45.188685   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.687270   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.978841   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.477932   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.478482   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.340237   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.341936   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.275247   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.777753   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.688548   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.187385   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.188261   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.478562   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.978677   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.840537   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.842188   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.278594   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.774847   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.687614   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:59.186203   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.479325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.979266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.340295   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.342857   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.776968   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.777421   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.278730   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.186645   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.187583   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.478127   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.478816   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:00.841474   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.340255   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.775648   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.779261   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.687557   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.688081   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.979671   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.478240   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.345230   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.841561   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:09.841629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.275641   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.276466   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.187771   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.688852   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.478832   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.978808   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:11.841717   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.341355   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.775133   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.274677   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.186001   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.186387   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.186931   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.979099   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.478539   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:16.841294   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:18.842244   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.776623   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:20.274196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.187095   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.689700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.978471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.478169   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.479319   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.341851   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.343663   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.275134   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.276420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.185307   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.186549   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.978977   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.979239   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:25.840539   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:27.840819   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:29.842580   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.775069   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.775244   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.275239   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:30.187482   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.687454   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.478330   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.479265   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.340974   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.342201   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.275561   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.775652   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.687487   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.689628   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:39.186244   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.979235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.981609   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.342452   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:38.841213   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.775893   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.274573   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.186313   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.687042   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.478993   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.479953   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.341359   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.840325   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.775636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.275821   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.687911   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.186598   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:44.977946   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:46.980471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.477591   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.841849   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.341443   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:47.276441   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.775182   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.687273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.187451   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.480325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.979440   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.841657   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.341257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.776199   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:54.274920   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.188121   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.191970   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.478903   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:58.979288   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.341479   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.841144   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.841215   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.775625   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.276127   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.687860   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:02.188506   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.480582   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:03.977715   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.841608   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.340546   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.775220   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.274050   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.277327   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.688269   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.187187   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:05.977760   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.978356   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.340629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.341333   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.775130   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.776410   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.686836   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.187035   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.187814   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.978478   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.477854   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.477883   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.341625   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.841300   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.842745   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:13.276029   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:15.774949   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.686998   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.689531   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.478177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.978154   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.844053   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:19.339915   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:17.775988   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:20.276213   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.187144   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.188273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.479275   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.977720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.342019   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.343747   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:22.775222   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.274922   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.186701   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.979093   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.478022   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.843596   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.340257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:27.275420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:29.275918   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:31.276702   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.186796   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.686406   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.478933   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.978757   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.341780   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.842117   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:33.774432   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.775822   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:34.687304   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:36.687850   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.187956   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.478261   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.978198   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.341314   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.840626   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.842475   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:38.275042   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:40.774892   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.686479   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.688800   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.980119   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:42.478070   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.478709   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.844661   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.340617   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.278574   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:45.775324   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.185760   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.186399   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.479381   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.979086   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.842369   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:49.341153   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:47.776338   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.275329   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.187219   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.687370   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.479573   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.978568   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.840818   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.842279   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.776812   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:54.780747   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.187111   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:57.187263   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.478479   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.977687   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.846775   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.340913   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.768584   46584 pod_ready.go:81] duration metric: took 4m0.001000825s waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:42:56.768615   46584 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:42:56.768623   46584 pod_ready.go:38] duration metric: took 4m9.613401399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:42:56.768641   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:42:56.768686   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:42:56.768739   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:42:56.842276   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:56.842298   46584 cri.go:89] found id: ""
	I0115 10:42:56.842309   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:42:56.842361   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.847060   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:42:56.847118   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:42:56.887059   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:56.887092   46584 cri.go:89] found id: ""
	I0115 10:42:56.887100   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:42:56.887158   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.893238   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:42:56.893289   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:42:56.933564   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:56.933593   46584 cri.go:89] found id: ""
	I0115 10:42:56.933603   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:42:56.933657   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.937882   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:42:56.937958   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:42:56.980953   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:56.980989   46584 cri.go:89] found id: ""
	I0115 10:42:56.980999   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:42:56.981051   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.985008   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:42:56.985058   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:42:57.026275   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:57.026305   46584 cri.go:89] found id: ""
	I0115 10:42:57.026315   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:42:57.026373   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.030799   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:42:57.030885   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:42:57.071391   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:42:57.071416   46584 cri.go:89] found id: ""
	I0115 10:42:57.071424   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:42:57.071485   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.076203   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:42:57.076254   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:42:57.119035   46584 cri.go:89] found id: ""
	I0115 10:42:57.119062   46584 logs.go:284] 0 containers: []
	W0115 10:42:57.119069   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:42:57.119074   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:42:57.119129   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:42:57.167335   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:57.167355   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:57.167360   46584 cri.go:89] found id: ""
	I0115 10:42:57.167367   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:42:57.167411   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.171919   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.176255   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:42:57.176284   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:42:57.328501   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:42:57.328538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:57.390279   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:42:57.390309   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:57.886607   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:42:57.886645   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:42:57.937391   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:42:57.937420   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:42:58.001313   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:42:58.001348   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:42:58.016772   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:42:58.016804   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:58.060489   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:42:58.060516   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:58.102993   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:42:58.103043   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:58.140732   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:42:58.140764   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:58.191891   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:42:58.191927   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:58.235836   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:42:58.235861   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:58.277424   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:42:58.277465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:00.844771   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:00.862922   46584 api_server.go:72] duration metric: took 4m17.850865s to wait for apiserver process to appear ...
	I0115 10:43:00.862946   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:00.862992   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:00.863055   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:00.909986   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:00.910013   46584 cri.go:89] found id: ""
	I0115 10:43:00.910020   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:00.910066   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.915553   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:00.915634   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:00.969923   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:00.969951   46584 cri.go:89] found id: ""
	I0115 10:43:00.969961   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:00.970021   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.974739   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:00.974805   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:01.024283   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.024305   46584 cri.go:89] found id: ""
	I0115 10:43:01.024314   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:01.024366   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.029325   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:01.029388   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:01.070719   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.070746   46584 cri.go:89] found id: ""
	I0115 10:43:01.070755   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:01.070806   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.074906   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:01.074969   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:01.111715   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.111747   46584 cri.go:89] found id: ""
	I0115 10:43:01.111756   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:01.111805   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.116173   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:01.116225   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:01.157760   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.157791   46584 cri.go:89] found id: ""
	I0115 10:43:01.157802   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:01.157866   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.161944   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:01.162010   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:01.201888   46584 cri.go:89] found id: ""
	I0115 10:43:01.201915   46584 logs.go:284] 0 containers: []
	W0115 10:43:01.201925   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:01.201932   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:01.201990   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:01.244319   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.244346   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.244352   46584 cri.go:89] found id: ""
	I0115 10:43:01.244361   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:01.244454   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.248831   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.253617   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:01.253643   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:01.309426   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:01.309465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.346755   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:01.346789   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.385238   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:01.385266   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.423907   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:01.423941   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.480867   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:01.480902   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:01.538367   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:01.538403   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.580240   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:01.580273   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.622561   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:01.622602   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:01.675436   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:01.675463   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:59.687714   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.186463   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.982902   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:03.478178   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.840619   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.841154   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:04.842905   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.080545   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:02.080578   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:02.144713   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:02.144756   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:02.160120   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:02.160147   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:04.776113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:43:04.782741   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:43:04.783959   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:04.783979   46584 api_server.go:131] duration metric: took 3.92102734s to wait for apiserver health ...
	I0115 10:43:04.783986   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:04.784019   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:04.784071   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:04.832660   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:04.832685   46584 cri.go:89] found id: ""
	I0115 10:43:04.832695   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:04.832750   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.836959   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:04.837009   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:04.878083   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:04.878103   46584 cri.go:89] found id: ""
	I0115 10:43:04.878110   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:04.878160   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.882581   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:04.882642   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:04.927778   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:04.927798   46584 cri.go:89] found id: ""
	I0115 10:43:04.927805   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:04.927848   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.932822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:04.932891   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:04.975930   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:04.975955   46584 cri.go:89] found id: ""
	I0115 10:43:04.975965   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:04.976010   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.980744   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:04.980803   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:05.024300   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.024325   46584 cri.go:89] found id: ""
	I0115 10:43:05.024332   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:05.024383   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.029091   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:05.029159   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:05.081239   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.081264   46584 cri.go:89] found id: ""
	I0115 10:43:05.081273   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:05.081332   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.085822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:05.085879   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:05.126839   46584 cri.go:89] found id: ""
	I0115 10:43:05.126884   46584 logs.go:284] 0 containers: []
	W0115 10:43:05.126896   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:05.126903   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:05.126963   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:05.168241   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.168269   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.168276   46584 cri.go:89] found id: ""
	I0115 10:43:05.168285   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:05.168343   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.173309   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.177144   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:05.177164   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:05.239116   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:05.239148   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:05.368712   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:05.368745   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:05.429504   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:05.429540   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:05.473181   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:05.473216   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.510948   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:05.510974   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.551052   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:05.551082   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.606711   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:05.606746   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:05.661634   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:05.661663   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:05.675627   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:05.675656   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:05.736266   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:05.736305   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.775567   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:05.775597   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:06.111495   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:06.111531   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:08.661238   46584 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:08.661275   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.661282   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.661288   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.661294   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.661300   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.661306   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.661316   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.661324   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.661335   46584 system_pods.go:74] duration metric: took 3.877343546s to wait for pod list to return data ...
	I0115 10:43:08.661342   46584 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:08.664367   46584 default_sa.go:45] found service account: "default"
	I0115 10:43:08.664393   46584 default_sa.go:55] duration metric: took 3.04125ms for default service account to be created ...
	I0115 10:43:08.664408   46584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:08.672827   46584 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:08.672852   46584 system_pods.go:89] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.672860   46584 system_pods.go:89] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.672867   46584 system_pods.go:89] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.672873   46584 system_pods.go:89] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.672879   46584 system_pods.go:89] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.672885   46584 system_pods.go:89] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.672895   46584 system_pods.go:89] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.672906   46584 system_pods.go:89] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.672920   46584 system_pods.go:126] duration metric: took 8.505614ms to wait for k8s-apps to be running ...
	I0115 10:43:08.672933   46584 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:08.672984   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:08.690592   46584 system_svc.go:56] duration metric: took 17.651896ms WaitForService to wait for kubelet.
	I0115 10:43:08.690618   46584 kubeadm.go:581] duration metric: took 4m25.678563679s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:08.690640   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:08.694652   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:08.694679   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:08.694692   46584 node_conditions.go:105] duration metric: took 4.045505ms to run NodePressure ...
	I0115 10:43:08.694705   46584 start.go:228] waiting for startup goroutines ...
	I0115 10:43:08.694713   46584 start.go:233] waiting for cluster config update ...
	I0115 10:43:08.694725   46584 start.go:242] writing updated cluster config ...
	I0115 10:43:08.694991   46584 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:08.747501   46584 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:08.750319   46584 out.go:177] * Done! kubectl is now configured to use "embed-certs-781270" cluster and "default" namespace by default
	I0115 10:43:04.686284   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:06.703127   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.180590   46387 pod_ready.go:81] duration metric: took 4m0.000776944s waiting for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:07.180624   46387 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0115 10:43:07.180644   46387 pod_ready.go:38] duration metric: took 4m1.198895448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:07.180669   46387 kubeadm.go:640] restartCluster took 5m11.875261334s
	W0115 10:43:07.180729   46387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0115 10:43:07.180765   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0115 10:43:05.479764   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.978536   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.343529   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841510   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841533   47063 pod_ready.go:81] duration metric: took 4m0.007868879s waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:09.841542   47063 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:09.841549   47063 pod_ready.go:38] duration metric: took 4m2.808610487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
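
The pod_ready waits above are bounded polls: the metrics-server pod never reaches a Ready condition of "True", so after the 4m0s budget the wait gives up with "context deadline exceeded" and startup continues without it. Below is a rough sketch of such a bounded readiness poll, using kubectl's jsonpath output rather than minikube's client-go based pod_ready.go; the pod name and the 4m budget are taken from the log, everything else is an assumption for illustration.

// pod_ready_wait.go - illustrative bounded readiness poll.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	const ns, pod = "kube-system", "metrics-server-57f55c9bc5-qpb25" // pod name from the log above
	deadline := time.Now().Add(4 * time.Minute)                      // same 4m0s budget the log reports
	for time.Now().Before(deadline) {
		if ok, err := podReady(ns, pod); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
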
	I0115 10:43:09.841562   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:09.841584   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:09.841625   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:12.165729   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.984931075s)
	I0115 10:43:12.165790   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:12.178710   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:43:12.188911   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:43:12.199329   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:43:12.199377   46387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 10:43:12.411245   46387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
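
Because 'kubeadm reset' removed /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf, the stale-config check above exits with status 2, cleanup is skipped, and 'kubeadm init' is re-run against the prepared /var/tmp/minikube/kubeadm.yaml while ignoring the preflight errors expected on a re-init. A rough Go sketch of that decision flow follows; the paths and config location come from the log, the --ignore-preflight-errors list is abbreviated, and minikube's sudo env PATH=... wrapper is not reproduced.

// reinit_check.go - illustrative decision flow, not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	anyLeft := false
	for _, c := range confs {
		if _, err := os.Stat(c); err == nil {
			anyLeft = true
		}
	}
	if anyLeft {
		fmt.Println("existing kubeconfigs found, they would be cleaned up first")
	} else {
		fmt.Println("no kubeconfigs left after reset, skipping stale-config cleanup")
	}

	// Re-run init with the generated config; requires root. Preflight errors
	// that are expected after a reset are ignored (full list elided, see log).
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}
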
	I0115 10:43:09.980448   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:12.478625   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:14.479234   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.904898   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:09.904921   47063 cri.go:89] found id: ""
	I0115 10:43:09.904930   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:09.904996   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.911493   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:09.911557   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:09.958040   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:09.958060   47063 cri.go:89] found id: ""
	I0115 10:43:09.958070   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:09.958122   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.962914   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:09.962972   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:10.033848   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:10.033875   47063 cri.go:89] found id: ""
	I0115 10:43:10.033885   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:10.033946   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.043173   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:10.043232   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:10.088380   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:10.088405   47063 cri.go:89] found id: ""
	I0115 10:43:10.088415   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:10.088478   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.094288   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:10.094350   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:10.145428   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:10.145453   47063 cri.go:89] found id: ""
	I0115 10:43:10.145463   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:10.145547   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.150557   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:10.150637   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:10.206875   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:10.206901   47063 cri.go:89] found id: ""
	I0115 10:43:10.206915   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:10.206971   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.211979   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:10.212039   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:10.260892   47063 cri.go:89] found id: ""
	I0115 10:43:10.260914   47063 logs.go:284] 0 containers: []
	W0115 10:43:10.260924   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:10.260936   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:10.260987   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:10.315938   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.315970   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:10.315978   47063 cri.go:89] found id: ""
	I0115 10:43:10.315987   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:10.316045   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.324077   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.332727   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:10.332756   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.376006   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:10.376034   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:10.967301   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:10.967337   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:11.033301   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:11.033327   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:11.091151   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:11.091184   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:11.145411   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:11.145447   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:11.194249   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:11.194274   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:11.373988   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:11.374020   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:11.442754   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:11.442788   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:11.486282   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:11.486315   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:11.547428   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:11.547464   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:11.560977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:11.561005   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:11.603150   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:11.603179   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.149324   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:14.166360   47063 api_server.go:72] duration metric: took 4m14.983478755s to wait for apiserver process to appear ...
	I0115 10:43:14.166391   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:14.166444   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:14.166504   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:14.211924   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:14.211950   47063 cri.go:89] found id: ""
	I0115 10:43:14.211961   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:14.212018   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.216288   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:14.216352   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:14.264237   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:14.264270   47063 cri.go:89] found id: ""
	I0115 10:43:14.264280   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:14.264338   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.268883   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:14.268947   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:14.329606   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:14.329631   47063 cri.go:89] found id: ""
	I0115 10:43:14.329639   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:14.329694   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.334069   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:14.334133   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:14.374753   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.374779   47063 cri.go:89] found id: ""
	I0115 10:43:14.374788   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:14.374842   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.380452   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:14.380529   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:14.422341   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:14.422371   47063 cri.go:89] found id: ""
	I0115 10:43:14.422380   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:14.422444   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.427106   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:14.427169   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:14.469410   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:14.469440   47063 cri.go:89] found id: ""
	I0115 10:43:14.469450   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:14.469511   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.475098   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:14.475216   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:14.533771   47063 cri.go:89] found id: ""
	I0115 10:43:14.533794   47063 logs.go:284] 0 containers: []
	W0115 10:43:14.533800   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:14.533805   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:14.533876   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:14.573458   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:14.573483   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:14.573490   47063 cri.go:89] found id: ""
	I0115 10:43:14.573498   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:14.573561   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.578186   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.583133   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:14.583157   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.631142   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:14.631180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:16.978406   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:18.979879   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:15.076904   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:15.076958   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:15.129739   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:15.129778   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:15.169656   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:15.169685   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:15.229569   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:15.229616   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:15.293037   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:15.293075   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:15.351198   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:15.351243   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:15.394604   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:15.394642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:15.451142   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:15.451180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:15.466108   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:15.466146   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:15.595576   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:15.595615   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:15.643711   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:15.643740   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.200861   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:43:18.207576   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:43:18.208943   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:18.208964   47063 api_server.go:131] duration metric: took 4.042566476s to wait for apiserver health ...
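
The healthz wait above is a plain HTTPS GET against the apiserver (here https://192.168.39.125:8444/healthz) that succeeds once the endpoint returns 200 with a body of "ok". Below is a minimal probe in Go, assuming an out-of-cluster caller that skips certificate verification because it does not carry the cluster CA; minikube's own check is more careful about TLS, so treat this only as a local diagnostic sketch.

// healthz_probe.go - illustrative probe, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal certificate; verification
			// is skipped here only because this is a throwaway diagnostic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.125:8444/healthz") // address taken from the log above
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
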
	I0115 10:43:18.208971   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:18.208992   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:18.209037   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:18.254324   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.254353   47063 cri.go:89] found id: ""
	I0115 10:43:18.254361   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:18.254405   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.258765   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:18.258844   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:18.303785   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.303811   47063 cri.go:89] found id: ""
	I0115 10:43:18.303820   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:18.303880   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.308940   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:18.309009   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:18.358850   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:18.358878   47063 cri.go:89] found id: ""
	I0115 10:43:18.358888   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:18.358954   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.363588   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:18.363656   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:18.412797   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.412820   47063 cri.go:89] found id: ""
	I0115 10:43:18.412828   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:18.412878   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.418704   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:18.418765   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:18.460050   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:18.460074   47063 cri.go:89] found id: ""
	I0115 10:43:18.460083   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:18.460138   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.465581   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:18.465642   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:18.516632   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.516656   47063 cri.go:89] found id: ""
	I0115 10:43:18.516665   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:18.516719   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.521873   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:18.521935   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:18.574117   47063 cri.go:89] found id: ""
	I0115 10:43:18.574145   47063 logs.go:284] 0 containers: []
	W0115 10:43:18.574154   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:18.574161   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:18.574222   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:18.630561   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.630593   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:18.630599   47063 cri.go:89] found id: ""
	I0115 10:43:18.630606   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:18.630666   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.636059   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.640707   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:18.640728   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.681635   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:18.681667   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:18.803880   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:18.803913   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.864605   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:18.864642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.918210   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:18.918250   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.960702   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:18.960733   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:19.013206   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:19.013242   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:19.070193   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:19.070230   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:19.087983   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:19.088023   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:19.150096   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:19.150132   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:19.196977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:19.197006   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:19.244166   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:19.244202   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:19.290314   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:19.290349   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:22.182766   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:22.182794   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.182801   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.182808   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.182814   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.182820   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.182826   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.182836   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.182848   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.182858   47063 system_pods.go:74] duration metric: took 3.973880704s to wait for pod list to return data ...
	I0115 10:43:22.182869   47063 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:22.186304   47063 default_sa.go:45] found service account: "default"
	I0115 10:43:22.186344   47063 default_sa.go:55] duration metric: took 3.464907ms for default service account to be created ...
	I0115 10:43:22.186354   47063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:22.192564   47063 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:22.192595   47063 system_pods.go:89] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.192604   47063 system_pods.go:89] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.192611   47063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.192620   47063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.192627   47063 system_pods.go:89] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.192634   47063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.192644   47063 system_pods.go:89] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.192651   47063 system_pods.go:89] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.192661   47063 system_pods.go:126] duration metric: took 6.301001ms to wait for k8s-apps to be running ...
	I0115 10:43:22.192669   47063 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:22.192720   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:22.210150   47063 system_svc.go:56] duration metric: took 17.476738ms WaitForService to wait for kubelet.
	I0115 10:43:22.210169   47063 kubeadm.go:581] duration metric: took 4m23.02729406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:22.210190   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:22.214086   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:22.214111   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:22.214124   47063 node_conditions.go:105] duration metric: took 3.928309ms to run NodePressure ...
	I0115 10:43:22.214137   47063 start.go:228] waiting for startup goroutines ...
	I0115 10:43:22.214146   47063 start.go:233] waiting for cluster config update ...
	I0115 10:43:22.214158   47063 start.go:242] writing updated cluster config ...
	I0115 10:43:22.214394   47063 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:22.264250   47063 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:22.267546   47063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-709012" cluster and "default" namespace by default
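
After the "Done!" line the kubeconfig's current context points at the freshly started cluster, and the reported 1.29.0 client against the 1.28.4 cluster is a one-step minor skew, which kubectl supports. A small optional sanity check is sketched below; it assumes kubectl is on PATH and that the kubeconfig minikube just wrote is the active one.

// context_check.go - illustrative post-start check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		fmt.Println("could not read current context:", err)
		return
	}
	fmt.Println("current context:", strings.TrimSpace(string(ctx)))

	// Client and server versions; a one-step minor skew (1.29 client vs 1.28
	// cluster, as in the log above) is within kubectl's supported range.
	out, err := exec.Command("kubectl", "version", "--output=yaml").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
	}
	fmt.Print(string(out))
}
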
	I0115 10:43:20.980266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:23.478672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.109313   46387 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0115 10:43:26.109392   46387 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:43:26.109501   46387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:43:26.109621   46387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:43:26.109750   46387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:43:26.109926   46387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:43:26.110051   46387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:43:26.110114   46387 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0115 10:43:26.110201   46387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:43:26.112841   46387 out.go:204]   - Generating certificates and keys ...
	I0115 10:43:26.112937   46387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:43:26.113031   46387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:43:26.113142   46387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:43:26.113237   46387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 10:43:26.113336   46387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:43:26.113414   46387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 10:43:26.113530   46387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 10:43:26.113617   46387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:43:26.113717   46387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:43:26.113814   46387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:43:26.113867   46387 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 10:43:26.113959   46387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:43:26.114029   46387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:43:26.114128   46387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:43:26.114214   46387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:43:26.114289   46387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:43:26.114400   46387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:43:26.115987   46387 out.go:204]   - Booting up control plane ...
	I0115 10:43:26.116100   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:43:26.116240   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:43:26.116349   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:43:26.116476   46387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:43:26.116677   46387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:43:26.116792   46387 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.004579 seconds
	I0115 10:43:26.116908   46387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:43:26.117097   46387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:43:26.117187   46387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:43:26.117349   46387 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-206509 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 10:43:26.117437   46387 kubeadm.go:322] [bootstrap-token] Using token: zc1jed.g57dxx99f2u8lwfg
	I0115 10:43:26.118960   46387 out.go:204]   - Configuring RBAC rules ...
	I0115 10:43:26.119074   46387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:43:26.119258   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:43:26.119401   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:43:26.119538   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:43:26.119657   46387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:43:26.119723   46387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:43:26.119796   46387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:43:26.119809   46387 kubeadm.go:322] 
	I0115 10:43:26.119857   46387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:43:26.119863   46387 kubeadm.go:322] 
	I0115 10:43:26.119923   46387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:43:26.119930   46387 kubeadm.go:322] 
	I0115 10:43:26.119950   46387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:43:26.120002   46387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:43:26.120059   46387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:43:26.120078   46387 kubeadm.go:322] 
	I0115 10:43:26.120120   46387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:43:26.120185   46387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:43:26.120249   46387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:43:26.120255   46387 kubeadm.go:322] 
	I0115 10:43:26.120359   46387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0115 10:43:26.120426   46387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:43:26.120433   46387 kubeadm.go:322] 
	I0115 10:43:26.120512   46387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120660   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 10:43:26.120687   46387 kubeadm.go:322]     --control-plane 	  
	I0115 10:43:26.120691   46387 kubeadm.go:322] 
	I0115 10:43:26.120757   46387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:43:26.120763   46387 kubeadm.go:322] 
	I0115 10:43:26.120831   46387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120969   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 10:43:26.120990   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:43:26.121000   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:43:26.122557   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:43:25.977703   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:27.979775   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.123754   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:43:26.133514   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
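
Configuring the bridge CNI here amounts to dropping a conflist into /etc/cni/net.d; the log shows a 457-byte 1-k8s.conflist being copied over. The sketch below writes a generic bridge plus portmap conflist of the usual shape; the JSON is a typical upstream-style example with an assumed 10.244.0.0/16 pod subnet, not the exact file minikube generated.

// write_cni_conflist.go - writes a generic bridge CNI conflist (illustrative).
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Requires root, like the scp step in the log above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
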
	I0115 10:43:26.152666   46387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:43:26.152776   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.152794   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=old-k8s-version-206509 minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.205859   46387 ops.go:34] apiserver oom_adj: -16
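
The oom_adj probe above ('cat /proc/$(pgrep kube-apiserver)/oom_adj', reported as -16) confirms the apiserver process is deprioritized for the OOM killer. The same check in Go, for illustration; it assumes pgrep is available and simply takes the first matching PID, while the lookup elsewhere in the log uses the stricter 'pgrep -xnf kube-apiserver.*minikube.*'.

// oom_adj_check.go - illustrative re-implementation of the check above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	val, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", val)
}
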
	I0115 10:43:26.398371   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.899064   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.398532   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.898380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.398986   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.899140   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.399224   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.898397   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.399321   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.899035   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.398549   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.898547   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.399096   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.898492   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.399077   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.899311   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:34.398839   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.980789   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:31.981727   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.479518   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.398611   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.898531   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.399422   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.898569   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.399432   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.399017   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.898561   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:39.398551   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.977916   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:38.978672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:39.899402   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.398556   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.898384   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:41.035213   46387 kubeadm.go:1088] duration metric: took 14.882479947s to wait for elevateKubeSystemPrivileges.
	I0115 10:43:41.035251   46387 kubeadm.go:406] StartCluster complete in 5m45.791159963s
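
The long run of 'kubectl get sa default' calls above is a simple poll: immediately after 'kubeadm init' the controller-manager has not yet created the "default" service account, so minikube retries roughly twice a second until the command exits 0 (about 15s in this run). A bare-bones version of that wait is sketched below, assuming kubectl on PATH and the kubeconfig path shown in the log.

// wait_default_sa.go - illustrative polling loop, not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
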
	I0115 10:43:41.035271   46387 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.035357   46387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:43:41.037947   46387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.038220   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:43:41.038242   46387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:43:41.038314   46387 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038317   46387 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038333   46387 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-206509"
	I0115 10:43:41.038334   46387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-206509"
	W0115 10:43:41.038341   46387 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:43:41.038389   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038388   46387 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038405   46387 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-206509"
	W0115 10:43:41.038428   46387 addons.go:243] addon metrics-server should already be in state true
	I0115 10:43:41.038446   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:43:41.038467   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038724   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038738   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038783   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038787   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038815   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038909   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.054942   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0115 10:43:41.055314   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.055844   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.055868   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.056312   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.056464   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0115 10:43:41.056853   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.056878   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.056910   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.057198   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0115 10:43:41.057317   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057341   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.057532   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.057682   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.057844   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.057955   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057979   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.058300   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.058921   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.058952   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.061947   46387 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-206509"
	W0115 10:43:41.061973   46387 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:43:41.061999   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.062381   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.062405   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.075135   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0115 10:43:41.075593   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.075704   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0115 10:43:41.076514   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.076536   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.076723   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.077196   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.077219   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.077225   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077564   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077607   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.077723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.080161   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.080238   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.082210   46387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:43:41.083883   46387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:43:41.085452   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:43:41.085477   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:43:41.083855   46387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.085496   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.085496   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:43:41.085511   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.086304   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0115 10:43:41.086675   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.087100   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.087120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.087465   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.087970   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.088011   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.090492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.091743   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092335   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092355   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092675   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092695   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092833   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.092969   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.093129   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.093233   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.094042   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.094209   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.094296   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.094372   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.105226   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0115 10:43:41.105644   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.106092   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.106120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.106545   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.106759   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.108735   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.109022   46387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.109040   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:43:41.109057   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.112322   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112771   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.112797   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112914   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.113100   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.113279   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.113442   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.353016   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:43:41.353038   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:43:41.357846   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.365469   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.465358   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:43:41.465379   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:43:41.532584   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:41.532612   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:43:41.598528   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
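(The long sed pipeline above is how the host record reaches CoreDNS: it rewrites the coredns ConfigMap so the Corefile gains a log directive above errors and a hosts block above the "forward . /etc/resolv.conf" line, mapping host.minikube.internal to the gateway IP 192.168.61.1. A hedged way to confirm the patch by hand — the jsonpath key and kubectl context name are assumptions based on defaults, not taken from this log:

	kubectl --context old-k8s-version-206509 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# the stanza inserted above the forward line should read:
	#        hosts {
	#           192.168.61.1 host.minikube.internal
	#           fallthrough
	#        }
)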
	I0115 10:43:41.605798   46387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-206509" context rescaled to 1 replicas
	I0115 10:43:41.605838   46387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:43:41.607901   46387 out.go:177] * Verifying Kubernetes components...
	I0115 10:43:41.609363   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:41.608778   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:42.634034   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268517129s)
	I0115 10:43:42.634071   46387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.024689682s)
	I0115 10:43:42.634090   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634095   46387 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.634103   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634046   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035489058s)
	I0115 10:43:42.634140   46387 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0115 10:43:42.634200   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.276326924s)
	I0115 10:43:42.634228   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634243   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634451   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634495   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634515   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634525   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634534   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634540   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634557   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634570   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634580   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634589   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634896   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634912   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634967   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634997   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.635008   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.656600   46387 node_ready.go:49] node "old-k8s-version-206509" has status "Ready":"True"
	I0115 10:43:42.656629   46387 node_ready.go:38] duration metric: took 22.522223ms waiting for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.656640   46387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:42.714802   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.714834   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.715273   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.715277   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.715303   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.722261   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:42.792908   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183451396s)
	I0115 10:43:42.792964   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.792982   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793316   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793339   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793352   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.793361   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793580   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793625   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793638   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793649   46387 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-206509"
	I0115 10:43:42.796113   46387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:43:42.798128   46387 addons.go:505] enable addons completed in 1.759885904s: enabled=[storage-provisioner default-storageclass metrics-server]
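(At this point the addon flow for the old-k8s-version-206509 profile has finished. A hedged follow-up to confirm what the log reports — the binary path and the metrics-server deployment name are assumptions; the addon and profile names come from the lines above:

	out/minikube-linux-amd64 -p old-k8s-version-206509 addons list
	kubectl --context old-k8s-version-206509 -n kube-system get deploy metrics-server
)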
	I0115 10:43:40.979360   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477862   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477895   46388 pod_ready.go:81] duration metric: took 4m0.006840717s waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:43.477906   46388 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:43.477915   46388 pod_ready.go:38] duration metric: took 4m3.414382685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:43.477933   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:43.477963   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:43.478033   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:43.533796   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:43.533825   46388 cri.go:89] found id: ""
	I0115 10:43:43.533836   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:43.533893   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.540165   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:43.540224   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:43.576831   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:43.576853   46388 cri.go:89] found id: ""
	I0115 10:43:43.576861   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:43.576922   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.581556   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:43.581616   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:43.625292   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.625315   46388 cri.go:89] found id: ""
	I0115 10:43:43.625323   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:43.625371   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.630741   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:43.630803   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:43.682511   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:43.682553   46388 cri.go:89] found id: ""
	I0115 10:43:43.682563   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:43.682621   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.688126   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:43.688194   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:43.739847   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.739866   46388 cri.go:89] found id: ""
	I0115 10:43:43.739873   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:43.739919   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.744569   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:43.744635   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:43.787603   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:43.787627   46388 cri.go:89] found id: ""
	I0115 10:43:43.787635   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:43.787676   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.792209   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:43.792271   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:43.838530   46388 cri.go:89] found id: ""
	I0115 10:43:43.838557   46388 logs.go:284] 0 containers: []
	W0115 10:43:43.838568   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:43.838576   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:43.838636   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:43.885727   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:43.885755   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:43.885761   46388 cri.go:89] found id: ""
	I0115 10:43:43.885769   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:43.885822   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.891036   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.895462   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:43.895493   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.939544   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:43.939568   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.985944   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:43.985973   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:44.052893   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:44.052923   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:44.116539   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:44.116569   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:44.173390   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:44.173432   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:44.194269   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:44.194295   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:44.239908   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:44.239935   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:44.729495   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:46.231080   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:46.231100   46387 pod_ready.go:81] duration metric: took 3.50881186s waiting for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:46.231109   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:48.239378   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:44.737413   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:44.737445   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:44.891846   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:44.891875   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:44.951418   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:44.951453   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:45.000171   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:45.000201   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:45.041629   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:45.041657   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.586439   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:47.602078   46388 api_server.go:72] duration metric: took 4m14.792413378s to wait for apiserver process to appear ...
	I0115 10:43:47.602102   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:47.602138   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:47.602193   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:47.646259   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:47.646283   46388 cri.go:89] found id: ""
	I0115 10:43:47.646291   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:47.646346   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.650757   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:47.650830   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:47.691688   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.691715   46388 cri.go:89] found id: ""
	I0115 10:43:47.691724   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:47.691777   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.696380   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:47.696467   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:47.738315   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:47.738340   46388 cri.go:89] found id: ""
	I0115 10:43:47.738349   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:47.738402   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.742810   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:47.742870   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:47.783082   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:47.783114   46388 cri.go:89] found id: ""
	I0115 10:43:47.783124   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:47.783178   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.787381   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:47.787432   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:47.832325   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:47.832353   46388 cri.go:89] found id: ""
	I0115 10:43:47.832363   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:47.832420   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.836957   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:47.837014   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:47.877146   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:47.877169   46388 cri.go:89] found id: ""
	I0115 10:43:47.877178   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:47.877231   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.881734   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:47.881782   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:47.921139   46388 cri.go:89] found id: ""
	I0115 10:43:47.921169   46388 logs.go:284] 0 containers: []
	W0115 10:43:47.921180   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:47.921188   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:47.921236   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:47.959829   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:47.959857   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:47.959864   46388 cri.go:89] found id: ""
	I0115 10:43:47.959872   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:47.959924   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.964105   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.968040   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:47.968059   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:48.017234   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:48.017266   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:48.073552   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:48.073583   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:48.512500   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:48.512539   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:48.564545   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:48.564578   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:48.609739   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:48.609768   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:48.654076   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:48.654106   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:48.691287   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:48.691314   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:48.739023   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:48.739063   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:48.791976   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:48.792018   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:48.808633   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:48.808659   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:48.933063   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:48.933099   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:48.974794   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:48.974825   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:49.735197   46387 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735227   46387 pod_ready.go:81] duration metric: took 3.504112323s waiting for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:49.735237   46387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735243   46387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740497   46387 pod_ready.go:92] pod "kube-proxy-lh96p" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:49.740515   46387 pod_ready.go:81] duration metric: took 5.267229ms waiting for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740525   46387 pod_ready.go:38] duration metric: took 7.083874855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:49.740537   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:49.740580   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:49.755697   46387 api_server.go:72] duration metric: took 8.149828702s to wait for apiserver process to appear ...
	I0115 10:43:49.755718   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:49.755731   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:43:49.762148   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:43:49.762995   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:43:49.763013   46387 api_server.go:131] duration metric: took 7.290279ms to wait for apiserver health ...
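(The healthz gate above is a plain HTTPS GET against the apiserver endpoint shown in the log. A hedged way to reproduce it from the test host — assumes the node IP 192.168.61.70 is reachable, and uses -k because the cluster CA is not in the local trust store:

	curl -sk https://192.168.61.70:8443/healthz
	# a healthy control plane answers with the literal body: ok
)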
	I0115 10:43:49.763019   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:49.766597   46387 system_pods.go:59] 4 kube-system pods found
	I0115 10:43:49.766615   46387 system_pods.go:61] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.766620   46387 system_pods.go:61] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.766626   46387 system_pods.go:61] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.766631   46387 system_pods.go:61] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.766637   46387 system_pods.go:74] duration metric: took 3.613036ms to wait for pod list to return data ...
	I0115 10:43:49.766642   46387 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:49.768826   46387 default_sa.go:45] found service account: "default"
	I0115 10:43:49.768844   46387 default_sa.go:55] duration metric: took 2.197235ms for default service account to be created ...
	I0115 10:43:49.768850   46387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:49.772271   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:49.772296   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.772304   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.772314   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.772321   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.772339   46387 retry.go:31] will retry after 223.439669ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.001140   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.001165   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.001170   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.001176   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.001181   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.001198   46387 retry.go:31] will retry after 329.400473ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.335362   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.335386   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.335391   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.335398   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.335403   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.335420   46387 retry.go:31] will retry after 466.919302ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.806617   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.806643   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.806649   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.806655   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.806660   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.806678   46387 retry.go:31] will retry after 596.303035ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.407231   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:51.407257   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:51.407264   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:51.407271   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:51.407275   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:51.407292   46387 retry.go:31] will retry after 688.903723ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.102330   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.102357   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.102364   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.102374   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.102382   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.102399   46387 retry.go:31] will retry after 817.783297ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.925586   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.925612   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.925620   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.925629   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.925636   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.925658   46387 retry.go:31] will retry after 797.004884ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:53.728788   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:53.728812   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:53.728817   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:53.728823   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:53.728827   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:53.728843   46387 retry.go:31] will retry after 1.021568746s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
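(The retry loop above keeps firing because only four kube-system pods are visible and the static control-plane pods — etcd, kube-apiserver, kube-controller-manager, kube-scheduler — are still missing from the API view. A hedged spot check against the same cluster, with label names assumed from standard kubeadm static-pod manifests rather than this log:

	kubectl --context old-k8s-version-206509 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
)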
	I0115 10:43:51.528236   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:43:51.533236   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:43:51.534697   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:43:51.534714   46388 api_server.go:131] duration metric: took 3.932606059s to wait for apiserver health ...
	I0115 10:43:51.534721   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:51.534744   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:51.534796   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:51.571704   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.571730   46388 cri.go:89] found id: ""
	I0115 10:43:51.571740   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:51.571793   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.576140   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:51.576201   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:51.614720   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:51.614803   46388 cri.go:89] found id: ""
	I0115 10:43:51.614823   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:51.614909   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.620904   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:51.620966   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:51.659679   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.659711   46388 cri.go:89] found id: ""
	I0115 10:43:51.659721   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:51.659779   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.664223   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:51.664275   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:51.701827   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:51.701850   46388 cri.go:89] found id: ""
	I0115 10:43:51.701858   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:51.701915   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.707296   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:51.707354   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:51.745962   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:51.745989   46388 cri.go:89] found id: ""
	I0115 10:43:51.746006   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:51.746061   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.750872   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:51.750942   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:51.796600   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:51.796637   46388 cri.go:89] found id: ""
	I0115 10:43:51.796647   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:51.796697   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.801250   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:51.801321   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:51.845050   46388 cri.go:89] found id: ""
	I0115 10:43:51.845072   46388 logs.go:284] 0 containers: []
	W0115 10:43:51.845081   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:51.845087   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:51.845144   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:51.880907   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:51.880935   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:51.880942   46388 cri.go:89] found id: ""
	I0115 10:43:51.880951   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:51.880997   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.885202   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.889086   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:51.889108   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.939740   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:51.939770   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.977039   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:51.977068   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:52.024927   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:52.024960   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:52.071850   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:52.071882   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:52.123313   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:52.123343   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:52.137274   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:52.137297   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:52.260488   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:52.260525   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:52.301121   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:52.301156   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:52.346323   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:52.346349   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:52.402759   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:52.402788   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:52.457075   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:52.457103   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:52.811321   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:52.811359   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:55.374293   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:55.374327   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.374335   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.374342   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.374348   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.374354   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.374361   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.374371   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.374382   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.374394   46388 system_pods.go:74] duration metric: took 3.83966542s to wait for pod list to return data ...
	I0115 10:43:55.374407   46388 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:55.376812   46388 default_sa.go:45] found service account: "default"
	I0115 10:43:55.376833   46388 default_sa.go:55] duration metric: took 2.418755ms for default service account to be created ...
	I0115 10:43:55.376843   46388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:55.383202   46388 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:55.383227   46388 system_pods.go:89] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.383236   46388 system_pods.go:89] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.383244   46388 system_pods.go:89] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.383285   46388 system_pods.go:89] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.383297   46388 system_pods.go:89] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.383303   46388 system_pods.go:89] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.383314   46388 system_pods.go:89] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.383325   46388 system_pods.go:89] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.383338   46388 system_pods.go:126] duration metric: took 6.489813ms to wait for k8s-apps to be running ...
	I0115 10:43:55.383349   46388 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:55.383401   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:55.399074   46388 system_svc.go:56] duration metric: took 15.719638ms WaitForService to wait for kubelet.
	I0115 10:43:55.399096   46388 kubeadm.go:581] duration metric: took 4m22.589439448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:55.399118   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:55.403855   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:55.403883   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:55.403896   46388 node_conditions.go:105] duration metric: took 4.771651ms to run NodePressure ...
	I0115 10:43:55.403908   46388 start.go:228] waiting for startup goroutines ...
	I0115 10:43:55.403917   46388 start.go:233] waiting for cluster config update ...
	I0115 10:43:55.403930   46388 start.go:242] writing updated cluster config ...
	I0115 10:43:55.404244   46388 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:55.453146   46388 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0115 10:43:55.455321   46388 out.go:177] * Done! kubectl is now configured to use "no-preload-824502" cluster and "default" namespace by default
	I0115 10:43:54.756077   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:54.756099   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:54.756104   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:54.756111   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:54.756116   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:54.756131   46387 retry.go:31] will retry after 1.152306172s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:55.913769   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:55.913792   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:55.913798   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:55.913804   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.913810   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:55.913826   46387 retry.go:31] will retry after 2.261296506s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:58.179679   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:58.179704   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:58.179710   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:58.179718   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:58.179722   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:58.179739   46387 retry.go:31] will retry after 2.012023518s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:00.197441   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:00.197471   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:00.197476   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:00.197483   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:00.197487   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:00.197505   46387 retry.go:31] will retry after 3.341619522s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:03.543730   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:03.543752   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:03.543757   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:03.543766   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:03.543771   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:03.543788   46387 retry.go:31] will retry after 2.782711895s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:06.332250   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:06.332276   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:06.332281   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:06.332288   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:06.332294   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:06.332310   46387 retry.go:31] will retry after 5.379935092s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:11.718269   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:11.718315   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:11.718324   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:11.718334   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:11.718343   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:11.718364   46387 retry.go:31] will retry after 6.238812519s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:17.963126   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:17.963150   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:17.963155   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:17.963162   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:17.963167   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:17.963183   46387 retry.go:31] will retry after 7.774120416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:25.743164   46387 system_pods.go:86] 6 kube-system pods found
	I0115 10:44:25.743190   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:25.743196   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:25.743200   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:25.743204   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:25.743210   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:25.743214   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:25.743231   46387 retry.go:31] will retry after 8.584433466s: missing components: kube-apiserver, kube-scheduler
	I0115 10:44:34.335720   46387 system_pods.go:86] 7 kube-system pods found
	I0115 10:44:34.335751   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:34.335759   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:34.335777   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:34.335785   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:34.335793   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:34.335801   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:34.335815   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:34.335834   46387 retry.go:31] will retry after 13.073630932s: missing components: kube-apiserver
	I0115 10:44:47.415277   46387 system_pods.go:86] 8 kube-system pods found
	I0115 10:44:47.415304   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:47.415311   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:47.415318   46387 system_pods.go:89] "kube-apiserver-old-k8s-version-206509" [e708ba3e-5deb-4b60-ab5b-52c4d671fa46] Running
	I0115 10:44:47.415326   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:47.415332   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:47.415339   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:47.415349   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:47.415355   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:47.415371   46387 system_pods.go:126] duration metric: took 57.64651504s to wait for k8s-apps to be running ...
	I0115 10:44:47.415382   46387 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:44:47.415444   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:44:47.433128   46387 system_svc.go:56] duration metric: took 17.740925ms WaitForService to wait for kubelet.
	I0115 10:44:47.433150   46387 kubeadm.go:581] duration metric: took 1m5.827285253s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:44:47.433174   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:44:47.435664   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:44:47.435685   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:44:47.435695   46387 node_conditions.go:105] duration metric: took 2.516113ms to run NodePressure ...
	I0115 10:44:47.435708   46387 start.go:228] waiting for startup goroutines ...
	I0115 10:44:47.435716   46387 start.go:233] waiting for cluster config update ...
	I0115 10:44:47.435728   46387 start.go:242] writing updated cluster config ...
	I0115 10:44:47.436091   46387 ssh_runner.go:195] Run: rm -f paused
	I0115 10:44:47.492053   46387 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0115 10:44:47.494269   46387 out.go:177] 
	W0115 10:44:47.495828   46387 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0115 10:44:47.497453   46387 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0115 10:44:47.498880   46387 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-206509" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:38:22 UTC, ends at Mon 2024-01-15 10:52:24 UTC. --
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.037995990Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&PodSandboxMetadata{Name:busybox,Uid:8a87a22c-0769-4d2b-9e34-04682f1975ea,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315144979310543,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:56.999007635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dzd2f,Uid:0d078727-4275-4308-9206-b471ce7aa586,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:170531
5144974852578,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:56.999003031Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d64a30310616e786c9bf78ca449a84b546f63728fa7900193b233d211bb9bbc0,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-qpb25,Uid:3f101dc0-1411-4554-a46a-7d829f2345ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315141066774531,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-qpb25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f101dc0-1411-4554-a46a-7d829f2345ad,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15
T10:38:56.999006734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&PodSandboxMetadata{Name:kube-proxy-d8lcq,Uid:9e68bc58-e11b-4534-9164-eb1b115b1721,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315137347741011,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bc58-e11b-4534-9164-eb1b115b1721,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:38:56.998988838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8a0c2885-50ff-40e4-bd6d-624f33f45c9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315137330092482,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-01-15T10:38:56.999001869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-709012,Uid:f9463f414b3141e35d9e5ee6b8849a92,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315130539419594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9463f414b3141e35d9e5ee6b8849a92,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f9463f414b3141e35d9e5ee6b8849a92,kubernetes.io/config.seen: 2024-01-15T10:38:49.984124187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-709012,Uid:
0af996f03f060971a07c47ab7207a249,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315130524038303,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af996f03f060971a07c47ab7207a249,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.125:8444,kubernetes.io/config.hash: 0af996f03f060971a07c47ab7207a249,kubernetes.io/config.seen: 2024-01-15T10:38:49.984121632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-709012,Uid:c57f9ebf45379653db2ca34fe521c184,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315130478897337,Labels:map[string]string{component: kube-controller-manager,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c57f9ebf45379653db2ca34fe521c184,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c57f9ebf45379653db2ca34fe521c184,kubernetes.io/config.seen: 2024-01-15T10:38:49.984123065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-709012,Uid:585f9295812ba39422526be195c682df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315130475182535,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.39.125:2379,kubernetes.io/config.hash: 585f9295812ba39422526be195c682df,kubernetes.io/config.seen: 2024-01-15T10:38:49.984117306Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8ac64189-f40f-4a52-9da1-89171c81e1ec name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.038727446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34176b15-edf0-4f08-b840-7d45f4f3cc2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.038798329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34176b15-edf0-4f08-b840-7d45f4f3cc2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.038969908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34176b15-edf0-4f08-b840-7d45f4f3cc2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.066677645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4aeaa6db-4e8b-4b94-8bdd-99b6bae7ea4d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.066780003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4aeaa6db-4e8b-4b94-8bdd-99b6bae7ea4d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.068743042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e632370b-ed8b-45fc-a5ef-683df9977f31 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.069222230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315944069207093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e632370b-ed8b-45fc-a5ef-683df9977f31 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.070174229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34f5a80d-0345-4379-90e4-9e0c3ab16ba6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.070244693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34f5a80d-0345-4379-90e4-9e0c3ab16ba6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.070511628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34f5a80d-0345-4379-90e4-9e0c3ab16ba6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.111392027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d4437e9d-9d15-42ab-8d34-05e9dfaae56d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.111506237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d4437e9d-9d15-42ab-8d34-05e9dfaae56d name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.113097502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d0dd7574-4287-438c-95ba-31e05c4cb38e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.113622438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315944113605713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d0dd7574-4287-438c-95ba-31e05c4cb38e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.114236217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6246405e-cd5f-4729-870c-ccf650a8ffee name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.114306061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6246405e-cd5f-4729-870c-ccf650a8ffee name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.114560051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6246405e-cd5f-4729-870c-ccf650a8ffee name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.152979587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1d4c3d94-9012-4dbf-87e5-b0ca8a379db8 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.153061975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1d4c3d94-9012-4dbf-87e5-b0ca8a379db8 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.155387545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=927c3da5-ff41-496d-ad76-17f01c94dcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.155917769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315944155901066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=927c3da5-ff41-496d-ad76-17f01c94dcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.156687742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ad9e1c3-e45f-42e4-8d37-3b78f2a95dab name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.156788034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ad9e1c3-e45f-42e4-8d37-3b78f2a95dab name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:24 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 10:52:24.156998208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ad9e1c3-e45f-42e4-8d37-3b78f2a95dab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff6b807e1af7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   9411d2b23ff86       storage-provisioner
	8fb769c7c010d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   872614188e424       busybox
	d7bf892409a21       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f6de38c7f39c7       coredns-5dd5756b68-dzd2f
	7836dc2548675       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   ee835a4d02884       kube-proxy-d8lcq
	9af5ff2ded14a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   9411d2b23ff86       storage-provisioner
	71abda814d83c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   66c48b48683c9       kube-scheduler-default-k8s-diff-port-709012
	16df79e79d4d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   b53717eff7abc       etcd-default-k8s-diff-port-709012
	5f5ae904a7af1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   92fecba08bfb9       kube-controller-manager-default-k8s-diff-port-709012
	9a14416fbd453       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   c8580dda7b408       kube-apiserver-default-k8s-diff-port-709012
	
	
	==> coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32938 - 8366 "HINFO IN 7933565490702889080.8938532282615614641. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028540689s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-709012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-709012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=default-k8s-diff-port-709012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_31_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:31:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-709012
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:52:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:49:40 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:49:40 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:49:40 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:49:40 +0000   Mon, 15 Jan 2024 10:39:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    default-k8s-diff-port-709012
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 24585c0896e64350a08959541c747c05
	  System UUID:                24585c08-96e6-4350-a089-59541c747c05
	  Boot ID:                    977e9528-e135-4755-ab18-3d90ca37c59d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-dzd2f                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-709012                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-709012             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-709012    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-d8lcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-709012             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-qpb25                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-709012 event: Registered Node default-k8s-diff-port-709012 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-709012 event: Registered Node default-k8s-diff-port-709012 in Controller
	
	
	==> dmesg <==
	[Jan15 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073941] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.645696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.294636] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152078] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.642593] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.292277] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.141094] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.222371] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.153547] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.276154] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +17.661926] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[Jan15 10:39] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] <==
	{"level":"info","ts":"2024-01-15T10:38:53.441692Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:38:53.45728Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-15T10:38:53.4578Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-01-15T10:38:53.458162Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-01-15T10:38:53.460506Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T10:38:53.460665Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T10:38:54.386524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-15T10:38:54.386665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-15T10:38:54.386718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-01-15T10:38:54.386748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 3"}
	{"level":"info","ts":"2024-01-15T10:38:54.386773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-01-15T10:38:54.3868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 3"}
	{"level":"info","ts":"2024-01-15T10:38:54.386825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-01-15T10:38:54.39574Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:default-k8s-diff-port-709012 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T10:38:54.395832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:38:54.396998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-01-15T10:38:54.397176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:38:54.397976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T10:38:54.403585Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T10:38:54.403632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-01-15T10:38:58.589097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.68291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-709012\" ","response":"range_response_count:1 size:5728"}
	{"level":"info","ts":"2024-01-15T10:38:58.589285Z","caller":"traceutil/trace.go:171","msg":"trace[1559917219] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-709012; range_end:; response_count:1; response_revision:571; }","duration":"172.882823ms","start":"2024-01-15T10:38:58.41639Z","end":"2024-01-15T10:38:58.589273Z","steps":["trace[1559917219] 'range keys from in-memory index tree'  (duration: 172.527665ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:48:54.461945Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-01-15T10:48:54.464864Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":860,"took":"2.309619ms","hash":2545533200}
	{"level":"info","ts":"2024-01-15T10:48:54.464958Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2545533200,"revision":860,"compact-revision":-1}
	
	
	==> kernel <==
	 10:52:24 up 14 min,  0 users,  load average: 0.21, 0.16, 0.10
	Linux default-k8s-diff-port-709012 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] <==
	I0115 10:48:56.291637       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:48:57.292330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:48:57.292424       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:48:57.292487       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:48:57.292522       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:48:57.292561       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:48:57.293632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:49:56.103922       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:49:57.293676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:57.293731       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:49:57.293741       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:49:57.293798       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:57.293808       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:49:57.294850       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:50:56.103630       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0115 10:51:56.104410       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:51:57.294636       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:51:57.294713       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:51:57.294727       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:51:57.295822       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:51:57.295883       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:51:57.295909       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] <==
	I0115 10:46:40.070520       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:47:09.557154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:47:10.081683       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:47:39.562575       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:47:40.091076       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:09.568734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:10.099136       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:39.575773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:40.108419       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:09.588050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:10.122147       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:39.593091       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:40.131816       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:49:59.049543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="362.184µs"
	E0115 10:50:09.598824       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:10.140103       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:50:13.049235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="247.267µs"
	E0115 10:50:39.605287       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:40.149022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:51:09.617725       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:10.157662       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:51:39.623296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:40.166916       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:52:09.628691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:52:10.177334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] <==
	I0115 10:38:58.407200       1 server_others.go:69] "Using iptables proxy"
	I0115 10:38:58.592635       1 node.go:141] Successfully retrieved node IP: 192.168.39.125
	I0115 10:38:58.647569       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 10:38:58.647656       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:38:58.651018       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:38:58.651111       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:38:58.651630       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:38:58.651736       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:58.652834       1 config.go:188] "Starting service config controller"
	I0115 10:38:58.652895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:38:58.652943       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:38:58.652966       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:38:58.654036       1 config.go:315] "Starting node config controller"
	I0115 10:38:58.654079       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:38:58.753686       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:38:58.753711       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:38:58.754416       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] <==
	I0115 10:38:54.089947       1 serving.go:348] Generated self-signed cert in-memory
	W0115 10:38:56.207133       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:38:56.207264       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:38:56.207303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:38:56.207328       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:38:56.305000       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0115 10:38:56.308357       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:56.325538       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:38:56.325593       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:38:56.335374       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:38:56.335542       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:38:56.427685       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:38:22 UTC, ends at Mon 2024-01-15 10:52:24 UTC. --
	Jan 15 10:49:45 default-k8s-diff-port-709012 kubelet[920]: E0115 10:49:45.044278     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:49:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:49:50.051996     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:49:50 default-k8s-diff-port-709012 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:49:50 default-k8s-diff-port-709012 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:49:50 default-k8s-diff-port-709012 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:49:59 default-k8s-diff-port-709012 kubelet[920]: E0115 10:49:59.033681     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:50:13 default-k8s-diff-port-709012 kubelet[920]: E0115 10:50:13.033598     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:50:24 default-k8s-diff-port-709012 kubelet[920]: E0115 10:50:24.033485     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:50:35 default-k8s-diff-port-709012 kubelet[920]: E0115 10:50:35.033382     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:50:47 default-k8s-diff-port-709012 kubelet[920]: E0115 10:50:47.035606     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:50:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:50:50.050245     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:50:50 default-k8s-diff-port-709012 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:50:50 default-k8s-diff-port-709012 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:50:50 default-k8s-diff-port-709012 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:51:01 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:01.033809     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:51:13 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:13.033902     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:51:24 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:24.033906     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:51:37 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:37.033297     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:51:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:50.034237     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:51:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:51:50.050791     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:51:50 default-k8s-diff-port-709012 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:51:50 default-k8s-diff-port-709012 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:51:50 default-k8s-diff-port-709012 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:52:04 default-k8s-diff-port-709012 kubelet[920]: E0115 10:52:04.034813     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:52:19 default-k8s-diff-port-709012 kubelet[920]: E0115 10:52:19.033620     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	
	
	==> storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] <==
	I0115 10:38:58.292863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:39:28.295049       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] <==
	I0115 10:39:29.409147       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:39:29.426302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:39:29.426379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:39:46.832680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:39:46.835409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d!
	I0115 10:39:46.835243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4df36283-0c04-4d23-ae3d-a2d9fc710156", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d became leader
	I0115 10:39:46.936397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qpb25
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25: exit status 1 (66.188835ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qpb25" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.33s)
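The post-mortem captured above is just three commands run against the failed profile. As a minimal shell sketch of the same triage, assuming the default-k8s-diff-port-709012 profile and its kubeconfig context from this run are still present on the build machine:

	# confirm the apiserver is still reported as Running for the profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-709012

	# list every pod not in phase Running, across all namespaces
	kubectl --context default-k8s-diff-port-709012 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running

	# describe whatever the previous command printed (here: metrics-server-57f55c9bc5-qpb25)
	kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25

As in the log above, the final describe can exit non-zero with NotFound if the pod was deleted between the listing and the describe call.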

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0115 10:44:12.883995   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:44:21.452345   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824502 -n no-preload-824502
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:52:56.054099995 +0000 UTC m=+5188.025267404
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
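For reference, the readiness check this step polls for can be reproduced by hand with kubectl wait. This is only a sketch, assuming the no-preload-824502 context from the run is still reachable; it uses the same namespace, label selector, and 9-minute budget quoted above rather than the test helper itself:

	kubectl --context no-preload-824502 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

If no matching pod becomes Ready within the timeout, the command exits non-zero, which is the manual analogue of the context deadline exceeded recorded here.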
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-824502 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-824502 logs -n 25: (1.668356464s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-967423 -- sudo                         | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-967423                                 | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-317803                           | kubernetes-upgrade-317803    | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-824502             | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:34:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:34:59.863813   47063 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:34:59.864093   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864103   47063 out.go:309] Setting ErrFile to fd 2...
	I0115 10:34:59.864108   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864345   47063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:34:59.864916   47063 out.go:303] Setting JSON to false
	I0115 10:34:59.865821   47063 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4600,"bootTime":1705310300,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:34:59.865878   47063 start.go:138] virtualization: kvm guest
	I0115 10:34:59.868392   47063 out.go:177] * [default-k8s-diff-port-709012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:34:59.869886   47063 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:34:59.869920   47063 notify.go:220] Checking for updates...
	I0115 10:34:59.871289   47063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:34:59.872699   47063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:34:59.874242   47063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:34:59.875739   47063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:34:59.877248   47063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:34:59.879143   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:34:59.879618   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.879682   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.893745   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0115 10:34:59.894091   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.894610   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.894633   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.894933   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.895112   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.895305   47063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:34:59.895579   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.895611   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.909045   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0115 10:34:59.909415   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.909868   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.909886   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.910173   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.910346   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.943453   47063 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:34:59.945154   47063 start.go:298] selected driver: kvm2
	I0115 10:34:59.945164   47063 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.945252   47063 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:34:59.945926   47063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.945991   47063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:34:59.959656   47063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:34:59.960028   47063 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:34:59.960078   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:34:59.960091   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:34:59.960106   47063 start_flags.go:321] config:
	{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.960261   47063 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.962534   47063 out.go:177] * Starting control plane node default-k8s-diff-port-709012 in cluster default-k8s-diff-port-709012
	I0115 10:35:00.734685   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:34:59.963970   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:34:59.964003   47063 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:34:59.964012   47063 cache.go:56] Caching tarball of preloaded images
	I0115 10:34:59.964081   47063 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:34:59.964090   47063 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:34:59.964172   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:34:59.964356   47063 start.go:365] acquiring machines lock for default-k8s-diff-port-709012: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:35:06.814638   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:09.886665   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:15.966704   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:19.038663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:25.118649   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:28.190674   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:34.270660   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:37.342618   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:43.422663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:46.494729   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:52.574698   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:55.646737   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:01.726677   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:04.798681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:10.878645   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:13.950716   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:20.030691   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:23.102681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:29.182668   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:32.254641   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:38.334686   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:41.406690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:47.486639   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:50.558690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:56.638684   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:59.710581   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:05.790664   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:08.862738   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:14.942615   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:18.014720   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:24.094644   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:27.098209   46387 start.go:369] acquired machines lock for "old-k8s-version-206509" in 4m37.373222591s
	I0115 10:37:27.098259   46387 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:27.098264   46387 fix.go:54] fixHost starting: 
	I0115 10:37:27.098603   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:27.098633   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:27.112818   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0115 10:37:27.113206   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:27.113638   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:37:27.113660   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:27.113943   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:27.114126   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:27.114270   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:37:27.115824   46387 fix.go:102] recreateIfNeeded on old-k8s-version-206509: state=Stopped err=<nil>
	I0115 10:37:27.115846   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	W0115 10:37:27.116007   46387 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:27.118584   46387 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-206509" ...
	I0115 10:37:27.119985   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Start
	I0115 10:37:27.120145   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring networks are active...
	I0115 10:37:27.120788   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network default is active
	I0115 10:37:27.121077   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network mk-old-k8s-version-206509 is active
	I0115 10:37:27.121463   46387 main.go:141] libmachine: (old-k8s-version-206509) Getting domain xml...
	I0115 10:37:27.122185   46387 main.go:141] libmachine: (old-k8s-version-206509) Creating domain...
	I0115 10:37:28.295990   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting to get IP...
	I0115 10:37:28.297038   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.297393   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.297470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.297380   47440 retry.go:31] will retry after 254.616903ms: waiting for machine to come up
	I0115 10:37:28.553730   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.554213   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.554238   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.554159   47440 retry.go:31] will retry after 350.995955ms: waiting for machine to come up
	I0115 10:37:28.906750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.907189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.907222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.907146   47440 retry.go:31] will retry after 441.292217ms: waiting for machine to come up
	I0115 10:37:29.349643   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.350011   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.350042   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.349959   47440 retry.go:31] will retry after 544.431106ms: waiting for machine to come up
	I0115 10:37:27.096269   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:27.096303   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:37:27.098084   46388 machine.go:91] provisioned docker machine in 4m37.366643974s
	I0115 10:37:27.098120   46388 fix.go:56] fixHost completed within 4m37.388460167s
	I0115 10:37:27.098126   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 4m37.388479036s
	W0115 10:37:27.098153   46388 start.go:694] error starting host: provision: host is not running
	W0115 10:37:27.098242   46388 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 10:37:27.098252   46388 start.go:709] Will try again in 5 seconds ...
	I0115 10:37:29.895609   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.896157   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.896189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.896032   47440 retry.go:31] will retry after 489.420436ms: waiting for machine to come up
	I0115 10:37:30.386614   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:30.387037   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:30.387071   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:30.387005   47440 retry.go:31] will retry after 779.227065ms: waiting for machine to come up
	I0115 10:37:31.167934   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:31.168316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:31.168343   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:31.168273   47440 retry.go:31] will retry after 878.328646ms: waiting for machine to come up
	I0115 10:37:32.048590   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:32.048976   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:32.049001   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:32.048920   47440 retry.go:31] will retry after 1.282650862s: waiting for machine to come up
	I0115 10:37:33.333699   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:33.334132   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:33.334161   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:33.334078   47440 retry.go:31] will retry after 1.548948038s: waiting for machine to come up
	I0115 10:37:32.100253   46388 start.go:365] acquiring machines lock for no-preload-824502: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:37:34.884455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:34.884845   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:34.884866   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:34.884800   47440 retry.go:31] will retry after 1.555315627s: waiting for machine to come up
	I0115 10:37:36.441833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:36.442329   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:36.442352   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:36.442281   47440 retry.go:31] will retry after 1.803564402s: waiting for machine to come up
	I0115 10:37:38.247833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:38.248241   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:38.248283   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:38.248213   47440 retry.go:31] will retry after 3.514521425s: waiting for machine to come up
	I0115 10:37:41.766883   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:41.767187   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:41.767222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:41.767154   47440 retry.go:31] will retry after 4.349871716s: waiting for machine to come up
	I0115 10:37:47.571869   46584 start.go:369] acquired machines lock for "embed-certs-781270" in 4m40.757219204s
	I0115 10:37:47.571928   46584 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:47.571936   46584 fix.go:54] fixHost starting: 
	I0115 10:37:47.572344   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:47.572382   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:47.591532   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0115 10:37:47.591905   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:47.592471   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:37:47.592513   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:47.592835   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:47.593060   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:37:47.593221   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:37:47.594825   46584 fix.go:102] recreateIfNeeded on embed-certs-781270: state=Stopped err=<nil>
	I0115 10:37:47.594856   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	W0115 10:37:47.595015   46584 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:47.597457   46584 out.go:177] * Restarting existing kvm2 VM for "embed-certs-781270" ...
	I0115 10:37:46.118479   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.118936   46387 main.go:141] libmachine: (old-k8s-version-206509) Found IP for machine: 192.168.61.70
	I0115 10:37:46.118960   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserving static IP address...
	I0115 10:37:46.118978   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has current primary IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.119402   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.119425   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserved static IP address: 192.168.61.70
	I0115 10:37:46.119441   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | skip adding static IP to network mk-old-k8s-version-206509 - found existing host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"}
	I0115 10:37:46.119455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Getting to WaitForSSH function...
	I0115 10:37:46.119467   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting for SSH to be available...
	I0115 10:37:46.121874   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122204   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.122236   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122340   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH client type: external
	I0115 10:37:46.122364   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa (-rw-------)
	I0115 10:37:46.122452   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:37:46.122476   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | About to run SSH command:
	I0115 10:37:46.122492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | exit 0
	I0115 10:37:46.214102   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | SSH cmd err, output: <nil>: 
	I0115 10:37:46.214482   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetConfigRaw
	I0115 10:37:46.215064   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.217294   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217579   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.217618   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217784   46387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:37:46.218001   46387 machine.go:88] provisioning docker machine ...
	I0115 10:37:46.218022   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:46.218242   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218440   46387 buildroot.go:166] provisioning hostname "old-k8s-version-206509"
	I0115 10:37:46.218462   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218593   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.220842   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221188   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.221226   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221374   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.221525   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221662   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221760   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.221905   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.222391   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.222411   46387 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-206509 && echo "old-k8s-version-206509" | sudo tee /etc/hostname
	I0115 10:37:46.354906   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-206509
	
	I0115 10:37:46.354939   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.357679   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358051   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.358089   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358245   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.358470   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358642   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358799   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.358957   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.359291   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.359318   46387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-206509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-206509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-206509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:37:46.491369   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:46.491397   46387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:37:46.491413   46387 buildroot.go:174] setting up certificates
	I0115 10:37:46.491422   46387 provision.go:83] configureAuth start
	I0115 10:37:46.491430   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.491687   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.494369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.494779   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494863   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.496985   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497338   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.497368   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497537   46387 provision.go:138] copyHostCerts
	I0115 10:37:46.497598   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:37:46.497613   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:37:46.497694   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:37:46.497806   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:37:46.497818   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:37:46.497848   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:37:46.497925   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:37:46.497945   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:37:46.497982   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:37:46.498043   46387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-206509 san=[192.168.61.70 192.168.61.70 localhost 127.0.0.1 minikube old-k8s-version-206509]
	I0115 10:37:46.824648   46387 provision.go:172] copyRemoteCerts
	I0115 10:37:46.824702   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:37:46.824723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.827470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827785   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.827818   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827972   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.828174   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.828336   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.828484   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:46.919822   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:37:46.941728   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:37:46.963042   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0115 10:37:46.983757   46387 provision.go:86] duration metric: configureAuth took 492.325875ms
	I0115 10:37:46.983777   46387 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:37:46.983966   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:37:46.984048   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.986525   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.986843   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.986869   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.987107   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.987323   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987503   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987651   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.987795   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.988198   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.988219   46387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:37:47.308225   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:37:47.308256   46387 machine.go:91] provisioned docker machine in 1.090242192s
	I0115 10:37:47.308269   46387 start.go:300] post-start starting for "old-k8s-version-206509" (driver="kvm2")
	I0115 10:37:47.308284   46387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:37:47.308310   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.308641   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:37:47.308674   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.311316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311665   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.311700   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311835   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.312024   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.312190   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.312315   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.407169   46387 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:37:47.411485   46387 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:37:47.411504   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:37:47.411566   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:37:47.411637   46387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:37:47.411715   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:37:47.419976   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:47.446992   46387 start.go:303] post-start completed in 138.700951ms
	I0115 10:37:47.447013   46387 fix.go:56] fixHost completed within 20.348748891s
	I0115 10:37:47.447031   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.449638   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.449996   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.450048   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.450136   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.450309   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450620   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.450749   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:47.451070   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:47.451085   46387 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 10:37:47.571711   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315067.520557177
	
	I0115 10:37:47.571729   46387 fix.go:206] guest clock: 1705315067.520557177
	I0115 10:37:47.571748   46387 fix.go:219] Guest: 2024-01-15 10:37:47.520557177 +0000 UTC Remote: 2024-01-15 10:37:47.447016864 +0000 UTC m=+297.904172196 (delta=73.540313ms)
	I0115 10:37:47.571772   46387 fix.go:190] guest clock delta is within tolerance: 73.540313ms
	I0115 10:37:47.571782   46387 start.go:83] releasing machines lock for "old-k8s-version-206509", held for 20.473537585s
	I0115 10:37:47.571810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.572157   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:47.574952   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575328   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.575366   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.575957   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576146   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576232   46387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:37:47.576273   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.576381   46387 ssh_runner.go:195] Run: cat /version.json
	I0115 10:37:47.576406   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.578863   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579052   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579218   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579248   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579347   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579378   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579385   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579577   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579583   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579775   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.579810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579912   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.580094   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.580316   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.702555   46387 ssh_runner.go:195] Run: systemctl --version
	I0115 10:37:47.708309   46387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:37:47.862103   46387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:37:47.869243   46387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:37:47.869321   46387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:37:47.886013   46387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:37:47.886033   46387 start.go:475] detecting cgroup driver to use...
	I0115 10:37:47.886093   46387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:37:47.901265   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:37:47.913762   46387 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:37:47.913815   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:37:47.926880   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:37:47.942744   46387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:37:48.050667   46387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:37:48.168614   46387 docker.go:233] disabling docker service ...
	I0115 10:37:48.168679   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:37:48.181541   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:37:48.193155   46387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:37:48.312374   46387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:37:48.420624   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:37:48.432803   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:37:48.449232   46387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0115 10:37:48.449292   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.458042   46387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:37:48.458109   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.466909   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.475511   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.484081   46387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:37:48.493186   46387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:37:48.502460   46387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:37:48.502507   46387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:37:48.514913   46387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:37:48.522816   46387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:37:48.630774   46387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:37:48.807089   46387 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:37:48.807170   46387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:37:48.812950   46387 start.go:543] Will wait 60s for crictl version
	I0115 10:37:48.813005   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:48.816919   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:37:48.860058   46387 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:37:48.860143   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.916839   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.968312   46387 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0115 10:37:48.969913   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:48.972776   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973219   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:48.973249   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973519   46387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:37:48.977593   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:48.990551   46387 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:37:48.990613   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:49.030917   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:49.030973   46387 ssh_runner.go:195] Run: which lz4
	I0115 10:37:49.035059   46387 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:37:49.039231   46387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:37:49.039262   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0115 10:37:47.598904   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Start
	I0115 10:37:47.599102   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring networks are active...
	I0115 10:37:47.599886   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network default is active
	I0115 10:37:47.600258   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network mk-embed-certs-781270 is active
	I0115 10:37:47.600652   46584 main.go:141] libmachine: (embed-certs-781270) Getting domain xml...
	I0115 10:37:47.601365   46584 main.go:141] libmachine: (embed-certs-781270) Creating domain...
	I0115 10:37:48.842510   46584 main.go:141] libmachine: (embed-certs-781270) Waiting to get IP...
	I0115 10:37:48.843267   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:48.843637   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:48.843731   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:48.843603   47574 retry.go:31] will retry after 262.69562ms: waiting for machine to come up
	I0115 10:37:49.108361   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.108861   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.108901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.108796   47574 retry.go:31] will retry after 379.820541ms: waiting for machine to come up
	I0115 10:37:49.490343   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.490939   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.490979   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.490898   47574 retry.go:31] will retry after 463.282743ms: waiting for machine to come up
	I0115 10:37:49.956222   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.956694   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.956725   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.956646   47574 retry.go:31] will retry after 539.780461ms: waiting for machine to come up
	I0115 10:37:50.498391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:50.498901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:50.498935   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:50.498849   47574 retry.go:31] will retry after 611.580301ms: waiting for machine to come up
	I0115 10:37:51.111752   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.112228   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.112263   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.112194   47574 retry.go:31] will retry after 837.335782ms: waiting for machine to come up
	I0115 10:37:50.824399   46387 crio.go:444] Took 1.789376 seconds to copy over tarball
	I0115 10:37:50.824466   46387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:37:53.837707   46387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013210203s)
	I0115 10:37:53.837742   46387 crio.go:451] Took 3.013322 seconds to extract the tarball
	I0115 10:37:53.837753   46387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:37:53.876939   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:53.922125   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:53.922161   46387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:37:53.922213   46387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:53.922249   46387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.922267   46387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.922300   46387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.922520   46387 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.922527   46387 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.922544   46387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.922547   46387 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.923794   46387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.923809   46387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.923811   46387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.923807   46387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.923785   46387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.923843   46387 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.083650   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0115 10:37:54.090328   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.095213   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.123642   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.124012   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.139399   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.139406   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.207117   46387 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0115 10:37:54.207170   46387 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0115 10:37:54.207168   46387 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0115 10:37:54.207202   46387 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.207230   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.207248   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.248774   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.269586   46387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0115 10:37:54.269636   46387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.269661   46387 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0115 10:37:54.269693   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.269693   46387 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.269785   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404758   46387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0115 10:37:54.404862   46387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0115 10:37:54.404907   46387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.404969   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0115 10:37:54.404996   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404873   46387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.405034   46387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0115 10:37:54.405064   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404975   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.405082   46387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.405174   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.405202   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.405149   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.502357   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.502402   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0115 10:37:54.502507   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0115 10:37:54.502547   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.502504   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.502620   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0115 10:37:54.510689   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0115 10:37:54.577797   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0115 10:37:54.577854   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0115 10:37:54.577885   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0115 10:37:54.577945   46387 cache_images.go:92] LoadImages completed in 655.770059ms
	W0115 10:37:54.578019   46387 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0115 10:37:54.578091   46387 ssh_runner.go:195] Run: crio config
	I0115 10:37:51.950759   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.951289   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.951322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.951237   47574 retry.go:31] will retry after 817.063291ms: waiting for machine to come up
	I0115 10:37:52.770506   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:52.771015   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:52.771043   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:52.770977   47574 retry.go:31] will retry after 1.000852987s: waiting for machine to come up
	I0115 10:37:53.774011   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:53.774478   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:53.774518   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:53.774452   47574 retry.go:31] will retry after 1.171113667s: waiting for machine to come up
	I0115 10:37:54.947562   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:54.947925   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:54.947951   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:54.947887   47574 retry.go:31] will retry after 1.982035367s: waiting for machine to come up
	I0115 10:37:54.646104   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:37:54.750728   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:37:54.750754   46387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:37:54.750779   46387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-206509 NodeName:old-k8s-version-206509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 10:37:54.750935   46387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-206509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-206509
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:37:54.751014   46387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-206509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:37:54.751063   46387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0115 10:37:54.761568   46387 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:37:54.761645   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:37:54.771892   46387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0115 10:37:54.788678   46387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:37:54.804170   46387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0115 10:37:54.820285   46387 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I0115 10:37:54.823831   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:54.834806   46387 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509 for IP: 192.168.61.70
	I0115 10:37:54.834838   46387 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:37:54.835023   46387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:37:54.835070   46387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:37:54.835136   46387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.key
	I0115 10:37:54.835190   46387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key.99472042
	I0115 10:37:54.835249   46387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key
	I0115 10:37:54.835356   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:37:54.835392   46387 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:37:54.835401   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:37:54.835439   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:37:54.835467   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:37:54.835491   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:37:54.835531   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:54.836204   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:37:54.859160   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:37:54.884674   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:37:54.907573   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:37:54.930846   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:37:54.953329   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:37:54.975335   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:37:54.997505   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:37:55.020494   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:37:55.042745   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:37:55.064085   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:37:55.085243   46387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:37:55.101189   46387 ssh_runner.go:195] Run: openssl version
	I0115 10:37:55.106849   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:37:55.118631   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123477   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123545   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.129290   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:37:55.141464   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:37:55.153514   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157901   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157967   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.163557   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:37:55.173419   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:37:55.184850   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189454   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189508   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.194731   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:37:55.205634   46387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:37:55.209881   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:37:55.215521   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:37:55.221031   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:37:55.226730   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:37:55.232566   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:37:55.238251   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:37:55.244098   46387 kubeadm.go:404] StartCluster: {Name:old-k8s-version-206509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:37:55.244188   46387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:37:55.244243   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:37:55.293223   46387 cri.go:89] found id: ""
	I0115 10:37:55.293296   46387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:37:55.305374   46387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:37:55.305403   46387 kubeadm.go:636] restartCluster start
	I0115 10:37:55.305477   46387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:37:55.314925   46387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.316564   46387 kubeconfig.go:92] found "old-k8s-version-206509" server: "https://192.168.61.70:8443"
	I0115 10:37:55.319961   46387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:37:55.329062   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.329148   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.340866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.829433   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.829549   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.843797   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.329336   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.329436   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.343947   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.829507   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.829623   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.843692   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.329438   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.329522   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.341416   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.830063   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.830153   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.844137   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.329648   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.329743   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.342211   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.829792   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.829891   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.842397   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:59.330122   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.330202   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.346667   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.931004   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:56.931428   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:56.931461   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:56.931364   47574 retry.go:31] will retry after 2.358737657s: waiting for machine to come up
	I0115 10:37:59.292322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:59.292784   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:59.292817   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:59.292726   47574 retry.go:31] will retry after 2.808616591s: waiting for machine to come up
	I0115 10:37:59.829162   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.829242   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.844148   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.329799   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.329901   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.345118   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.829706   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.829806   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.845105   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.329598   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.329678   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.341872   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.829350   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.829424   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.843987   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.329874   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.329944   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.342152   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.829617   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.829711   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.841636   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.329206   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.329306   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.341373   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.829987   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.830080   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.842151   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:04.329957   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.330047   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.342133   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.103667   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:02.104098   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:02.104127   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:02.104058   47574 retry.go:31] will retry after 2.823867183s: waiting for machine to come up
	I0115 10:38:04.931219   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:04.931550   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:04.931594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:04.931523   47574 retry.go:31] will retry after 4.042933854s: waiting for machine to come up
	I0115 10:38:04.829477   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.829599   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.841546   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.329351   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:05.329417   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:05.341866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.341892   46387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:05.341900   46387 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:05.341910   46387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:05.342037   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:05.376142   46387 cri.go:89] found id: ""
	I0115 10:38:05.376206   46387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:05.391778   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:05.402262   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:05.402331   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411457   46387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411489   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:05.526442   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.239898   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.449098   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.515862   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.598545   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:06.598653   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.099595   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.599677   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.099492   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.599629   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.627737   46387 api_server.go:72] duration metric: took 2.029196375s to wait for apiserver process to appear ...
	I0115 10:38:08.627766   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:08.627803   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.199201   47063 start.go:369] acquired machines lock for "default-k8s-diff-port-709012" in 3m10.23481312s
	I0115 10:38:10.199261   47063 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:10.199269   47063 fix.go:54] fixHost starting: 
	I0115 10:38:10.199630   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:10.199667   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:10.215225   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0115 10:38:10.215627   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:10.216040   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:10.216068   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:10.216372   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:10.216583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:10.216829   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:10.218454   47063 fix.go:102] recreateIfNeeded on default-k8s-diff-port-709012: state=Stopped err=<nil>
	I0115 10:38:10.218482   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	W0115 10:38:10.218676   47063 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:10.220860   47063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-709012" ...
	I0115 10:38:08.976035   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976545   46584 main.go:141] libmachine: (embed-certs-781270) Found IP for machine: 192.168.72.222
	I0115 10:38:08.976574   46584 main.go:141] libmachine: (embed-certs-781270) Reserving static IP address...
	I0115 10:38:08.976592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has current primary IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976946   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.976980   46584 main.go:141] libmachine: (embed-certs-781270) DBG | skip adding static IP to network mk-embed-certs-781270 - found existing host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"}
	I0115 10:38:08.976997   46584 main.go:141] libmachine: (embed-certs-781270) Reserved static IP address: 192.168.72.222
	I0115 10:38:08.977017   46584 main.go:141] libmachine: (embed-certs-781270) Waiting for SSH to be available...
	I0115 10:38:08.977033   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Getting to WaitForSSH function...
	I0115 10:38:08.979155   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979456   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.979483   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979609   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH client type: external
	I0115 10:38:08.979658   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa (-rw-------)
	I0115 10:38:08.979699   46584 main.go:141] libmachine: (embed-certs-781270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:08.979718   46584 main.go:141] libmachine: (embed-certs-781270) DBG | About to run SSH command:
	I0115 10:38:08.979734   46584 main.go:141] libmachine: (embed-certs-781270) DBG | exit 0
	I0115 10:38:09.082171   46584 main.go:141] libmachine: (embed-certs-781270) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:09.082546   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetConfigRaw
	I0115 10:38:09.083235   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.085481   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.085845   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.085873   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.086115   46584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:38:09.086309   46584 machine.go:88] provisioning docker machine ...
	I0115 10:38:09.086331   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.086549   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086714   46584 buildroot.go:166] provisioning hostname "embed-certs-781270"
	I0115 10:38:09.086736   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086884   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.089346   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089702   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.089727   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.090035   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090180   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090319   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.090464   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.090845   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.090862   46584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname
	I0115 10:38:09.240609   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781270
	
	I0115 10:38:09.240643   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.243233   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243586   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.243616   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243764   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.243976   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244292   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.244453   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.244774   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.244800   46584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781270/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:09.388902   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:09.388932   46584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:09.388968   46584 buildroot.go:174] setting up certificates
	I0115 10:38:09.388981   46584 provision.go:83] configureAuth start
	I0115 10:38:09.388998   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.389254   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.392236   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392603   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.392643   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392750   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.395249   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395596   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.395629   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395797   46584 provision.go:138] copyHostCerts
	I0115 10:38:09.395858   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:09.395872   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:09.395939   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:09.396037   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:09.396045   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:09.396067   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:09.396134   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:09.396141   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:09.396159   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:09.396212   46584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781270 san=[192.168.72.222 192.168.72.222 localhost 127.0.0.1 minikube embed-certs-781270]
	I0115 10:38:09.457000   46584 provision.go:172] copyRemoteCerts
	I0115 10:38:09.457059   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:09.457081   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.459709   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460074   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.460102   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460356   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.460522   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.460681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.460798   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:09.556211   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:09.578947   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:09.601191   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:09.623814   46584 provision.go:86] duration metric: configureAuth took 234.815643ms
	I0115 10:38:09.623844   46584 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:09.624070   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:09.624157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.626592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.626930   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.626972   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.627141   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.627326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627492   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627607   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.627755   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.628058   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.628086   46584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:09.931727   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:09.931765   46584 machine.go:91] provisioned docker machine in 845.442044ms
	I0115 10:38:09.931777   46584 start.go:300] post-start starting for "embed-certs-781270" (driver="kvm2")
	I0115 10:38:09.931790   46584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:09.931810   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.932100   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:09.932130   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.934487   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934811   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.934836   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934999   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.935160   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.935313   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.935480   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.028971   46584 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:10.032848   46584 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:10.032871   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:10.032955   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:10.033045   46584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:10.033162   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:10.042133   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:10.064619   46584 start.go:303] post-start completed in 132.827155ms
	I0115 10:38:10.064658   46584 fix.go:56] fixHost completed within 22.492708172s
	I0115 10:38:10.064681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.067323   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067651   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.067675   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067812   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.068037   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068272   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068449   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.068587   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:10.068904   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:10.068919   46584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:10.199025   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315090.148648598
	
	I0115 10:38:10.199045   46584 fix.go:206] guest clock: 1705315090.148648598
	I0115 10:38:10.199053   46584 fix.go:219] Guest: 2024-01-15 10:38:10.148648598 +0000 UTC Remote: 2024-01-15 10:38:10.064662616 +0000 UTC m=+303.401739583 (delta=83.985982ms)
	I0115 10:38:10.199088   46584 fix.go:190] guest clock delta is within tolerance: 83.985982ms
	I0115 10:38:10.199096   46584 start.go:83] releasing machines lock for "embed-certs-781270", held for 22.627192785s
	I0115 10:38:10.199122   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.199368   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:10.201962   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202349   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.202389   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202603   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203417   46584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:10.203461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.203546   46584 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:10.203570   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.206022   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206257   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206371   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206400   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.206673   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206700   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206768   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.206910   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.206911   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.207087   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.207191   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.207335   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.207465   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.327677   46584 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:10.333127   46584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:10.473183   46584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:10.480054   46584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:10.480115   46584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:10.494367   46584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:10.494388   46584 start.go:475] detecting cgroup driver to use...
	I0115 10:38:10.494463   46584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:10.508327   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:10.519950   46584 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:10.520003   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:10.531743   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:10.544980   46584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:10.650002   46584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:10.767145   46584 docker.go:233] disabling docker service ...
	I0115 10:38:10.767214   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:10.782073   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:10.796419   46584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:10.913422   46584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:11.016113   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:11.032638   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:11.053360   46584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:11.053415   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.064008   46584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:11.064067   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.074353   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.084486   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.093962   46584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:11.105487   46584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:11.117411   46584 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:11.117469   46584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:11.133780   46584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:11.145607   46584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:11.257012   46584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:11.437979   46584 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:11.438050   46584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:11.445814   46584 start.go:543] Will wait 60s for crictl version
	I0115 10:38:11.445896   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:38:11.449770   46584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:11.491895   46584 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:11.491985   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.543656   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.609733   46584 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:11.611238   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:11.614594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.614947   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:11.614988   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.615225   46584 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:11.619516   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:11.635101   46584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:11.635170   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:11.675417   46584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:11.675504   46584 ssh_runner.go:195] Run: which lz4
	I0115 10:38:11.679733   46584 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:38:11.683858   46584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:11.683889   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:13.628977   46387 api_server.go:269] stopped: https://192.168.61.70:8443/healthz: Get "https://192.168.61.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0115 10:38:13.629022   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.222501   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Start
	I0115 10:38:10.222694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring networks are active...
	I0115 10:38:10.223335   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network default is active
	I0115 10:38:10.225164   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network mk-default-k8s-diff-port-709012 is active
	I0115 10:38:10.225189   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Getting domain xml...
	I0115 10:38:10.225201   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Creating domain...
	I0115 10:38:11.529205   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting to get IP...
	I0115 10:38:11.530265   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530808   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530886   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.530786   47689 retry.go:31] will retry after 220.836003ms: waiting for machine to come up
	I0115 10:38:11.753500   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754152   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.754119   47689 retry.go:31] will retry after 288.710195ms: waiting for machine to come up
	I0115 10:38:12.044613   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045149   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045179   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.045065   47689 retry.go:31] will retry after 321.962888ms: waiting for machine to come up
	I0115 10:38:12.368694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369119   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369171   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.369075   47689 retry.go:31] will retry after 457.128837ms: waiting for machine to come up
	I0115 10:38:12.827574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828079   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828108   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.828011   47689 retry.go:31] will retry after 524.042929ms: waiting for machine to come up
	I0115 10:38:13.353733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354288   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354315   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:13.354237   47689 retry.go:31] will retry after 885.937378ms: waiting for machine to come up
	I0115 10:38:14.241653   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242258   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:14.242185   47689 retry.go:31] will retry after 1.168061338s: waiting for machine to come up
	I0115 10:38:14.984346   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:14.984377   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:14.984395   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.129596   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:15.129627   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:15.129650   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.224825   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.224852   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:15.628377   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.666573   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.666642   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:16.128080   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:16.148642   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:38:16.156904   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:38:16.156927   46387 api_server.go:131] duration metric: took 7.529154555s to wait for apiserver health ...
	I0115 10:38:16.156936   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:38:16.156942   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:16.159248   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
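For reference, the healthz probing above (403 while the RBAC bootstrap roles are still being created, 500 while post-start hooks finish, then 200) is a plain HTTPS poll against the apiserver. A minimal sketch of that kind of wait loop, assuming a standalone client that skips certificate verification; the URL, interval and timeout here are illustrative, not minikube's exact values:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline expires. 403 and 500 responses are expected
    // while RBAC bootstrap roles and post-start hooks are still completing.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed certificate during
                // bootstrap, so verification is skipped for this probe only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.70:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }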
	I0115 10:38:13.665699   46584 crio.go:444] Took 1.986003 seconds to copy over tarball
	I0115 10:38:13.665769   46584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:16.702911   46584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037102789s)
	I0115 10:38:16.702954   46584 crio.go:451] Took 3.037230 seconds to extract the tarball
	I0115 10:38:16.702966   46584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:16.160810   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:16.173072   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:16.205009   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:16.216599   46387 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:16.216637   46387 system_pods.go:61] "coredns-5644d7b6d9-5qcrz" [3fc31c2b-9c3f-4167-8b3f-bbe262591a90] Running
	I0115 10:38:16.216645   46387 system_pods.go:61] "coredns-5644d7b6d9-rgrbc" [1c2c2a33-f329-4cb3-8e05-900a252ceed3] Running
	I0115 10:38:16.216651   46387 system_pods.go:61] "etcd-old-k8s-version-206509" [8c2919cc-4b82-4387-be0d-f3decf4b324b] Running
	I0115 10:38:16.216658   46387 system_pods.go:61] "kube-apiserver-old-k8s-version-206509" [51e63cf2-5728-471d-b447-3f3aa9454ac7] Running
	I0115 10:38:16.216663   46387 system_pods.go:61] "kube-controller-manager-old-k8s-version-206509" [6dec6bf0-ce5d-4f87-8bf7-c774214eb8ea] Running
	I0115 10:38:16.216668   46387 system_pods.go:61] "kube-proxy-w9fdn" [42b28054-8876-4854-a041-62be5688c1c2] Running
	I0115 10:38:16.216675   46387 system_pods.go:61] "kube-scheduler-old-k8s-version-206509" [7a50352c-2129-4de4-84e8-3cb5d8ccd463] Running
	I0115 10:38:16.216681   46387 system_pods.go:61] "storage-provisioner" [f341413b-8261-4a78-9f28-449be173cf19] Running
	I0115 10:38:16.216690   46387 system_pods.go:74] duration metric: took 11.655731ms to wait for pod list to return data ...
	I0115 10:38:16.216703   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:16.220923   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:16.220962   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:16.220978   46387 node_conditions.go:105] duration metric: took 4.267954ms to run NodePressure ...
	I0115 10:38:16.221005   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:16.519042   46387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:16.523772   46387 retry.go:31] will retry after 264.775555ms: kubelet not initialised
	I0115 10:38:17.172203   46387 retry.go:31] will retry after 553.077445ms: kubelet not initialised
	I0115 10:38:18.053202   46387 retry.go:31] will retry after 653.279352ms: kubelet not initialised
	I0115 10:38:18.837753   46387 retry.go:31] will retry after 692.673954ms: kubelet not initialised
	I0115 10:38:19.596427   46387 retry.go:31] will retry after 679.581071ms: kubelet not initialised
	I0115 10:38:15.412204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412706   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412766   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:15.412670   47689 retry.go:31] will retry after 895.041379ms: waiting for machine to come up
	I0115 10:38:16.309188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309764   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:16.309692   47689 retry.go:31] will retry after 1.593821509s: waiting for machine to come up
	I0115 10:38:17.904625   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905131   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905168   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:17.905073   47689 retry.go:31] will retry after 2.002505122s: waiting for machine to come up
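The retry.go:31 lines above (both the "kubelet not initialised" retries and the "waiting for machine to come up" retries) follow one pattern: re-run a check, sleep for a growing and slightly jittered interval, then try again. A rough sketch of that pattern; the backoff factor and jitter below are illustrative assumptions, not minikube's exact policy:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
    // reached, sleeping for a growing, jittered duration between failures.
    func retryWithBackoff(maxAttempts int, initial time.Duration, fn func() error) error {
        delay := initial
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            err := fn()
            if err == nil {
                return nil
            }
            if attempt == maxAttempts {
                return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
            }
            // Add up to 50% random jitter so concurrent waiters do not probe
            // the host in lockstep, then grow the base delay for next time.
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
            fmt.Printf("will retry after %s\n", jittered)
            time.Sleep(jittered)
            delay = delay * 3 / 2
        }
        return errors.New("unreachable")
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(5, time.Second, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("machine does not have an IP address yet")
            }
            return nil
        })
        fmt.Println("result:", err)
    }

A caller waiting for the KVM domain to obtain an IP address would pass a closure that inspects the DHCP leases and returns an error until an address appears.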
	I0115 10:38:16.745093   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:17.184204   46584 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:17.184235   46584 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:17.184325   46584 ssh_runner.go:195] Run: crio config
	I0115 10:38:17.249723   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:17.249748   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:17.249764   46584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:17.249782   46584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-781270 NodeName:embed-certs-781270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:17.249936   46584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-781270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:17.250027   46584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-781270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
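The stray %!"(MISSING) fragments in the generated kubelet configuration above (and the %!s(MISSING) in the CRIO provisioning command further down) look like corruption but are ordinary Go fmt behaviour: the rendered config contains literal % characters such as "0%", and when that text is passed through a printf-style call the %" sequence is read as a verb with no matching argument. A minimal reproduction using only the standard library:

    package main

    import "fmt"

    func main() {
        // The rendered config contains a literal percent sign. Used as a
        // format string with no arguments, fmt reads `%"` as a verb with a
        // missing operand and annotates it in the output.
        fmt.Printf("nodefs.available: \"0%\"\n")
        // Prints: nodefs.available: "0%!"(MISSING)
    }

The intended values are simply "0%" for nodefs.available, nodefs.inodesFree and imagefs.available.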
	I0115 10:38:17.250091   46584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:17.262237   46584 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:17.262313   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:17.273370   46584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0115 10:38:17.292789   46584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:17.312254   46584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0115 10:38:17.332121   46584 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:17.336199   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:17.349009   46584 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270 for IP: 192.168.72.222
	I0115 10:38:17.349047   46584 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:17.349200   46584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:17.349246   46584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:17.349316   46584 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/client.key
	I0115 10:38:17.685781   46584 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key.4e007618
	I0115 10:38:17.685874   46584 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key
	I0115 10:38:17.685990   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:17.686022   46584 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:17.686033   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:17.686054   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:17.686085   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:17.686107   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:17.686147   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:17.686866   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:17.713652   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:17.744128   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:17.771998   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:17.796880   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:17.822291   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:17.848429   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:17.874193   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:17.898873   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:17.922742   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:17.945123   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:17.967188   46584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
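The repeated "scp memory --> <path> (N bytes)" steps above copy in-memory byte slices straight onto the guest over the established SSH connection rather than staging files locally. One way to get the same effect with golang.org/x/crypto/ssh is to pipe the payload into sudo tee on the remote side; this is a hypothetical helper under that assumption, not the ssh_runner implementation, and the key path, user and address mirror the values logged for the default-k8s-diff-port-709012 machine purely for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory writes an in-memory payload to a path on the guest by piping
    // it into `sudo tee` over an SSH session, so root-owned destinations such
    // as /var/lib/minikube are writable without staging a temporary file.
    func copyMemory(client *ssh.Client, content []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(content)
        return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
    }

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.125:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        if err := copyMemory(client, []byte("example payload\n"), "/var/tmp/minikube/example.txt"); err != nil {
            panic(err)
        }
    }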
	I0115 10:38:17.983237   46584 ssh_runner.go:195] Run: openssl version
	I0115 10:38:17.988658   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:17.998141   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002462   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002521   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.008136   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:18.017766   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:18.027687   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032418   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032479   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.038349   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:18.048395   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:18.058675   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063369   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063441   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.068886   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:18.078459   46584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:18.083181   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:18.089264   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:18.095399   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:18.101292   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:18.107113   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:18.112791   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
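The "openssl x509 -noout -in <cert> -checkend 86400" runs above ask a single question per certificate: will it still be valid 24 hours from now? An equivalent check in Go, assuming the certificate is readable as a local PEM file (the path in main is one of the files probed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at
    // path will expire within the given duration, the equivalent of
    // `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }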
	I0115 10:38:18.118337   46584 kubeadm.go:404] StartCluster: {Name:embed-certs-781270 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:18.118561   46584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:18.118611   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:18.162363   46584 cri.go:89] found id: ""
	I0115 10:38:18.162454   46584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:18.172261   46584 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:18.172286   46584 kubeadm.go:636] restartCluster start
	I0115 10:38:18.172357   46584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:18.181043   46584 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.182845   46584 kubeconfig.go:92] found "embed-certs-781270" server: "https://192.168.72.222:8443"
	I0115 10:38:18.186506   46584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:18.194997   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.195069   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.205576   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.695105   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.695200   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.709836   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.195362   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.195533   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.210585   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.695088   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.695201   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.710436   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.196063   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.196145   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.211948   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.695433   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.695545   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.710981   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.195510   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.195588   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.206769   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.695111   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.695192   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.706765   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.288898   46387 retry.go:31] will retry after 1.97886626s: kubelet not initialised
	I0115 10:38:22.273756   46387 retry.go:31] will retry after 2.35083465s: kubelet not initialised
	I0115 10:38:19.909015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909598   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:19.909539   47689 retry.go:31] will retry after 2.883430325s: waiting for machine to come up
	I0115 10:38:22.794280   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794702   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794729   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:22.794660   47689 retry.go:31] will retry after 3.219865103s: waiting for machine to come up
	I0115 10:38:22.195343   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.195454   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.210740   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:22.695835   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.695900   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.710247   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.195555   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.195633   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.207117   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.695569   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.695632   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.706867   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.195323   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.195428   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.207679   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.695971   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.696049   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.708342   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.195900   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.195994   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.207896   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.695417   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.695490   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.706180   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.195799   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.195890   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.206859   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.695558   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.695648   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.706652   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.630486   46387 retry.go:31] will retry after 5.638904534s: kubelet not initialised
	I0115 10:38:26.016121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016496   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016520   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:26.016463   47689 retry.go:31] will retry after 3.426285557s: waiting for machine to come up
	I0115 10:38:29.447165   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447643   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Found IP for machine: 192.168.39.125
	I0115 10:38:29.447678   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has current primary IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447719   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserving static IP address...
	I0115 10:38:29.448146   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.448172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | skip adding static IP to network mk-default-k8s-diff-port-709012 - found existing host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"}
	I0115 10:38:29.448183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserved static IP address: 192.168.39.125
	I0115 10:38:29.448204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for SSH to be available...
	I0115 10:38:29.448215   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Getting to WaitForSSH function...
	I0115 10:38:29.450376   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450690   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.450715   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH client type: external
	I0115 10:38:29.450867   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa (-rw-------)
	I0115 10:38:29.450899   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:29.450909   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | About to run SSH command:
	I0115 10:38:29.450919   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | exit 0
	I0115 10:38:29.550560   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:29.550940   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetConfigRaw
	I0115 10:38:29.551686   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.554629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555085   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.555117   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555426   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:38:29.555642   47063 machine.go:88] provisioning docker machine ...
	I0115 10:38:29.555672   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:29.555875   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556053   47063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-709012"
	I0115 10:38:29.556076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556217   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.558493   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.558804   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.558835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.559018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.559209   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559363   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.559677   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.560009   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.560028   47063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-709012 && echo "default-k8s-diff-port-709012" | sudo tee /etc/hostname
	I0115 10:38:29.706028   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-709012
	
	I0115 10:38:29.706059   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.708893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.709343   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709409   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.709631   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709789   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709938   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.710121   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.710473   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.710501   47063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-709012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-709012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-709012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:29.845884   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:29.845916   47063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:29.845938   47063 buildroot.go:174] setting up certificates
	I0115 10:38:29.845953   47063 provision.go:83] configureAuth start
	I0115 10:38:29.845973   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.846293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.849072   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.849558   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849755   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.852196   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852548   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.852574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852664   47063 provision.go:138] copyHostCerts
	I0115 10:38:29.852716   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:29.852726   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:29.852778   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:29.852870   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:29.852877   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:29.852896   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:29.852957   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:29.852964   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:29.852981   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:29.853031   47063 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-709012 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube default-k8s-diff-port-709012]
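The "generating server cert" step above issues a machine server certificate signed by the shared CA, with the SAN list split between IP addresses and DNS names (192.168.39.125, localhost, 127.0.0.1, minikube, default-k8s-diff-port-709012). A condensed sketch of what such a step involves with the standard crypto/x509 package; the validity period and key size are illustrative assumptions, and loading the CA material from disk is left out:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a machine server certificate signed by caCert/caKey,
    // splitting the requested SANs into IP addresses and DNS names the way a
    // "generating server cert ... san=[...]" step does.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

A caller would feed it the shared CA material and the SAN list from the log, e.g. []string{"192.168.39.125", "localhost", "127.0.0.1", "minikube", "default-k8s-diff-port-709012"}.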
	I0115 10:38:30.777181   46388 start.go:369] acquired machines lock for "no-preload-824502" in 58.676870352s
	I0115 10:38:30.777252   46388 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:30.777263   46388 fix.go:54] fixHost starting: 
	I0115 10:38:30.777697   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:30.777733   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:30.795556   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0115 10:38:30.795931   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:30.796387   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:38:30.796417   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:30.796825   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:30.797001   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:30.797164   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:38:30.798953   46388 fix.go:102] recreateIfNeeded on no-preload-824502: state=Stopped err=<nil>
	I0115 10:38:30.798978   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	W0115 10:38:30.799146   46388 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:30.800981   46388 out.go:177] * Restarting existing kvm2 VM for "no-preload-824502" ...
	I0115 10:38:27.195033   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.195128   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.205968   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:27.695992   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.696075   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.707112   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.195726   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:28.195798   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:28.206794   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.206837   46584 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:28.206846   46584 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:28.206858   46584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:28.206917   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:28.256399   46584 cri.go:89] found id: ""
	I0115 10:38:28.256468   46584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:28.272234   46584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:28.281359   46584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:28.281439   46584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290385   46584 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290431   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:28.417681   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.012673   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.212322   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.296161   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.378870   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:29.378965   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.879587   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.379077   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.879281   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:31.379626   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
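The restart path above rewrites /var/tmp/minikube/kubeadm.yaml and then replays individual "kubeadm init phase" commands (certs, kubeconfig, kubelet-start, control-plane, etcd) before polling for the kube-apiserver process with pgrep. A compact sketch of that sequence using the binary and config paths shown in the log; the sudo/PATH wrapping and most error handling from the real runner are trimmed:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"

        // Replay the individual init phases against the refreshed config,
        // mirroring the "kubeadm init phase ..." invocations in the log.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", cfg)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                fmt.Printf("%v failed: %v\n%s", phase, err, out)
                return
            }
        }

        // Wait for the static-pod apiserver process to appear, as the
        // repeated pgrep probes in the log do.
        for i := 0; i < 120; i++ {
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }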
	I0115 10:38:29.951966   47063 provision.go:172] copyRemoteCerts
	I0115 10:38:29.952019   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:29.952040   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.954784   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955082   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.955104   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955285   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.955466   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.955649   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.955793   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.057077   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:30.081541   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 10:38:30.109962   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:30.140809   47063 provision.go:86] duration metric: configureAuth took 294.836045ms
	I0115 10:38:30.140840   47063 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:30.141071   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:30.141167   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.144633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.144975   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.145015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.145177   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.145378   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145539   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145703   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.145927   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.146287   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.146310   47063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:30.484993   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:30.485022   47063 machine.go:91] provisioned docker machine in 929.358403ms
	I0115 10:38:30.485035   47063 start.go:300] post-start starting for "default-k8s-diff-port-709012" (driver="kvm2")
	I0115 10:38:30.485049   47063 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:30.485067   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.485390   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:30.485431   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.488115   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488473   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.488512   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.488837   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.489018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.489171   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.590174   47063 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:30.594879   47063 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:30.594907   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:30.594974   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:30.595069   47063 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:30.595183   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:30.604525   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:30.631240   47063 start.go:303] post-start completed in 146.190685ms
	I0115 10:38:30.631270   47063 fix.go:56] fixHost completed within 20.431996373s
	I0115 10:38:30.631293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.634188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634544   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.634577   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634807   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.635014   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635185   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.635574   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.636012   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.636032   47063 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:30.777043   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315110.724251584
	
	I0115 10:38:30.777069   47063 fix.go:206] guest clock: 1705315110.724251584
	I0115 10:38:30.777079   47063 fix.go:219] Guest: 2024-01-15 10:38:30.724251584 +0000 UTC Remote: 2024-01-15 10:38:30.631274763 +0000 UTC m=+210.817197544 (delta=92.976821ms)
	I0115 10:38:30.777107   47063 fix.go:190] guest clock delta is within tolerance: 92.976821ms
	I0115 10:38:30.777114   47063 start.go:83] releasing machines lock for "default-k8s-diff-port-709012", held for 20.577876265s
	I0115 10:38:30.777143   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.777406   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:30.780611   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781041   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.781076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781250   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.781876   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782186   47063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:30.782240   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.782295   47063 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:30.782321   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.785597   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786228   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.786255   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786386   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786698   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.786881   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.787023   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.787078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.787095   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.787204   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.787774   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.787930   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.788121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.788345   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.919659   47063 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:30.926237   47063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:31.076313   47063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:31.085010   47063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:31.085087   47063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:31.104237   47063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:31.104265   47063 start.go:475] detecting cgroup driver to use...
	I0115 10:38:31.104331   47063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:31.124044   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:31.139494   47063 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:31.139581   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:31.154894   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:31.172458   47063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:31.307400   47063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:31.496675   47063 docker.go:233] disabling docker service ...
	I0115 10:38:31.496733   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:31.513632   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:31.526228   47063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:31.681556   47063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:31.816489   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:31.831193   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:31.853530   47063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:31.853602   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.864559   47063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:31.864661   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.875384   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.888460   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.904536   47063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:31.915622   47063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:31.929209   47063 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:31.929266   47063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:31.948691   47063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:31.959872   47063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:32.102988   47063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:32.300557   47063 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:32.300632   47063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:32.305636   47063 start.go:543] Will wait 60s for crictl version
	I0115 10:38:32.305691   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:38:32.309883   47063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:32.354459   47063 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:32.354594   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.402443   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.463150   47063 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:30.802324   46388 main.go:141] libmachine: (no-preload-824502) Calling .Start
	I0115 10:38:30.802525   46388 main.go:141] libmachine: (no-preload-824502) Ensuring networks are active...
	I0115 10:38:30.803127   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network default is active
	I0115 10:38:30.803476   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network mk-no-preload-824502 is active
	I0115 10:38:30.803799   46388 main.go:141] libmachine: (no-preload-824502) Getting domain xml...
	I0115 10:38:30.804452   46388 main.go:141] libmachine: (no-preload-824502) Creating domain...
	I0115 10:38:32.173614   46388 main.go:141] libmachine: (no-preload-824502) Waiting to get IP...
	I0115 10:38:32.174650   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.175113   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.175211   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.175106   47808 retry.go:31] will retry after 275.127374ms: waiting for machine to come up
	I0115 10:38:32.451595   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.452150   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.452183   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.452095   47808 retry.go:31] will retry after 258.80121ms: waiting for machine to come up
	I0115 10:38:32.712701   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.713348   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.713531   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.713459   47808 retry.go:31] will retry after 440.227123ms: waiting for machine to come up
	I0115 10:38:33.155845   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.156595   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.156625   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.156500   47808 retry.go:31] will retry after 428.795384ms: waiting for machine to come up
	I0115 10:38:33.587781   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.588169   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.588190   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.588118   47808 retry.go:31] will retry after 720.536787ms: waiting for machine to come up
	I0115 10:38:34.310098   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:34.310640   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:34.310674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:34.310604   47808 retry.go:31] will retry after 841.490959ms: waiting for machine to come up
	I0115 10:38:30.274782   46387 retry.go:31] will retry after 7.853808987s: kubelet not initialised
	I0115 10:38:32.464592   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:32.467583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.467962   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:32.467993   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.468218   47063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:32.472463   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:32.488399   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:32.488488   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:32.535645   47063 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:32.535776   47063 ssh_runner.go:195] Run: which lz4
	I0115 10:38:32.541468   47063 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:38:32.547264   47063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:32.547297   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:34.427435   47063 crio.go:444] Took 1.886019 seconds to copy over tarball
	I0115 10:38:34.427510   47063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:31.879639   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.379656   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.408694   46584 api_server.go:72] duration metric: took 3.029823539s to wait for apiserver process to appear ...
	I0115 10:38:32.408737   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:32.408760   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.614020   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:36.614053   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:36.614068   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.687561   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.687606   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.134400   46387 retry.go:31] will retry after 7.988567077s: kubelet not initialised
	I0115 10:38:35.154196   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:35.154644   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:35.154674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:35.154615   47808 retry.go:31] will retry after 1.099346274s: waiting for machine to come up
	I0115 10:38:36.255575   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:36.256111   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:36.256151   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:36.256038   47808 retry.go:31] will retry after 1.294045748s: waiting for machine to come up
	I0115 10:38:37.551734   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:37.552569   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:37.552593   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:37.552527   47808 retry.go:31] will retry after 1.720800907s: waiting for machine to come up
	I0115 10:38:39.275250   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:39.275651   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:39.275684   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:39.275595   47808 retry.go:31] will retry after 1.914509744s: waiting for machine to come up
	I0115 10:38:37.765711   47063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.338169875s)
	I0115 10:38:37.765741   47063 crio.go:451] Took 3.338279 seconds to extract the tarball
	I0115 10:38:37.765753   47063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:37.807016   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:37.858151   47063 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:37.858195   47063 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:37.858295   47063 ssh_runner.go:195] Run: crio config
	I0115 10:38:37.933830   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:37.933851   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:37.933872   47063 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:37.933896   47063 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-709012 NodeName:default-k8s-diff-port-709012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:37.934040   47063 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-709012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:37.934132   47063 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-709012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0115 10:38:37.934202   47063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:37.945646   47063 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:37.945728   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:37.957049   47063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0115 10:38:37.978770   47063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:37.995277   47063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0115 10:38:38.012964   47063 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:38.016803   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:38.028708   47063 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012 for IP: 192.168.39.125
	I0115 10:38:38.028740   47063 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:38.028887   47063 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:38.028926   47063 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:38.028988   47063 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.key
	I0115 10:38:38.048801   47063 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key.657bd91f
	I0115 10:38:38.048895   47063 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key
	I0115 10:38:38.049019   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:38.049058   47063 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:38.049075   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:38.049110   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:38.049149   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:38.049183   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:38.049241   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:38.049848   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:38.078730   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:38.102069   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:38.124278   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:38.150354   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:38.173703   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:38.201758   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:38.227016   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:38.249876   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:38.271859   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:38.294051   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:38.316673   47063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:38.335128   47063 ssh_runner.go:195] Run: openssl version
	I0115 10:38:38.342574   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:38.355889   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361805   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361871   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.369192   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:38.381493   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:38.391714   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396728   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396787   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.402624   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:38.413957   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:38.425258   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430627   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430697   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.440362   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:38.453323   47063 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:38.458803   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:38.465301   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:38.471897   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:38.478274   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:38.484890   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:38.490909   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:38:38.496868   47063 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:38.496966   47063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:38.497015   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:38.539389   47063 cri.go:89] found id: ""
	I0115 10:38:38.539475   47063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:38.550998   47063 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:38.551020   47063 kubeadm.go:636] restartCluster start
	I0115 10:38:38.551076   47063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:38.561885   47063 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:38.563439   47063 kubeconfig.go:92] found "default-k8s-diff-port-709012" server: "https://192.168.39.125:8444"
	I0115 10:38:38.566482   47063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:38.576458   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:38.576521   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:38.588702   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.077323   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.077407   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.089885   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.577363   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.577441   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.591111   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:36.909069   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.917556   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.917594   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.409134   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.417305   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.417348   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.909251   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.916788   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.916824   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.409535   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:38.416538   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:38.416572   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.908929   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.863238   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.863279   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.863294   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.869897   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.869922   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.909113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.065422   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:40.065467   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:40.408921   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.414320   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:38:40.424348   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:40.424378   46584 api_server.go:131] duration metric: took 8.015632919s to wait for apiserver health ...
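The repeated 500 responses above are the result of polling the apiserver's /healthz endpoint roughly every 500ms until every post-start hook reports ok; the rbac/bootstrap-roles and scheduling hooks are the last to clear before the 200 at 10:38:40.414320. For reference, a minimal Go sketch of such a polling loop (the real logic lives in minikube's api_server.go; the URL, timeout, and skipped certificate verification here are illustrative assumptions, not minikube's actual client setup):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 or the
// timeout elapses. Certificate verification is skipped only because this
// sketch does not load the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every check reports ok
			}
			// A 500 with "[-]poststarthook/... failed" means hooks are still settling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.222:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}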
	I0115 10:38:40.424390   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:40.424398   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:40.426615   46584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:40.427979   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:40.450675   46584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
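The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. The log does not show its contents; the sketch below writes a generic bridge conflist of the same shape, where the plugin chain, bridge name, and 10.244.0.0/16 subnet are assumptions rather than the values minikube actually renders:

package main

import (
	"log"
	"os"
)

// A generic bridge CNI config: a bridge plugin with host-local IPAM plus a
// chained portmap plugin. Values are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}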
	I0115 10:38:40.478174   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:40.492540   46584 system_pods.go:59] 9 kube-system pods found
	I0115 10:38:40.492582   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492593   46584 system_pods.go:61] "coredns-5dd5756b68-w4p2z" [87d362df-5c29-4a04-b44f-c502cf6849bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492609   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:40.492619   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:40.492633   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:40.492646   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:40.492658   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:40.492671   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:40.492687   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:40.492700   46584 system_pods.go:74] duration metric: took 14.502202ms to wait for pod list to return data ...
	I0115 10:38:40.492715   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:40.496471   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:40.496504   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:40.496517   46584 node_conditions.go:105] duration metric: took 3.794528ms to run NodePressure ...
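The "waiting for kube-system pods to appear" and NodePressure steps amount to a pod list plus a read of node capacity. A rough client-go sketch, assuming the kubeconfig written at /home/jenkins/minikube-integration/17953-4821/kubeconfig is reachable from where the code runs:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig updated earlier in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-4821/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of "waiting for kube-system pods to appear".
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Equivalent of the NodePressure / capacity check.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}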
	I0115 10:38:40.496538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:40.770732   46584 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777051   46584 kubeadm.go:787] kubelet initialised
	I0115 10:38:40.777118   46584 kubeadm.go:788] duration metric: took 6.307286ms waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777139   46584 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:40.784605   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.798293   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798365   46584 pod_ready.go:81] duration metric: took 13.654765ms waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.798389   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798402   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.807236   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807276   46584 pod_ready.go:81] duration metric: took 8.862426ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.807289   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807297   46584 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.813904   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813932   46584 pod_ready.go:81] duration metric: took 6.62492ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.813944   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813951   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.882407   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882458   46584 pod_ready.go:81] duration metric: took 68.496269ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.882472   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882485   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.282123   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282158   46584 pod_ready.go:81] duration metric: took 399.656962ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.282172   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282181   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.683979   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684007   46584 pod_ready.go:81] duration metric: took 401.816493ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.684017   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684023   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.082465   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082490   46584 pod_ready.go:81] duration metric: took 398.460424ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.082501   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082509   46584 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.484454   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484490   46584 pod_ready.go:81] duration metric: took 401.970108ms waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.484504   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484513   46584 pod_ready.go:38] duration metric: took 1.707353329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
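Each pod_ready line above is one iteration of a wait on the pod's Ready condition, and it is cut short while the node itself still reports Ready:"False". A helper-style sketch of that condition check (the function name and the 400ms poll interval are hypothetical, not minikube's exact values):

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls until the named pod reports the Ready condition True,
// up to the given timeout (4m0s in the log above).
func WaitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}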
	I0115 10:38:42.484534   46584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:42.499693   46584 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:42.499715   46584 kubeadm.go:640] restartCluster took 24.327423485s
	I0115 10:38:42.499733   46584 kubeadm.go:406] StartCluster complete in 24.381392225s
	I0115 10:38:42.499752   46584 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.499817   46584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:42.502965   46584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.503219   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:42.503253   46584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:42.503356   46584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-781270"
	I0115 10:38:42.503374   46584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-781270"
	I0115 10:38:42.503383   46584 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-781270"
	I0115 10:38:42.503395   46584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-781270"
	W0115 10:38:42.503402   46584 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:42.503451   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:42.503493   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503504   46584 addons.go:69] Setting metrics-server=true in profile "embed-certs-781270"
	I0115 10:38:42.503520   46584 addons.go:234] Setting addon metrics-server=true in "embed-certs-781270"
	W0115 10:38:42.503533   46584 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:42.503577   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503826   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503850   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503855   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503871   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503895   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503924   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.522809   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0115 10:38:42.523025   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0115 10:38:42.523163   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0115 10:38:42.523260   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523382   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523755   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523861   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.523990   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524323   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524345   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524415   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524585   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524605   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524825   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524992   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525017   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525375   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525412   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525571   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.525747   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.528762   46584 addons.go:234] Setting addon default-storageclass=true in "embed-certs-781270"
	W0115 10:38:42.528781   46584 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:42.528807   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.529117   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.529140   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.544693   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0115 10:38:42.545013   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0115 10:38:42.545528   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.545628   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.546235   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546265   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546268   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546280   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546650   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546687   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546820   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.546918   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0115 10:38:42.547068   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.547459   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.548255   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.548269   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.548859   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.549002   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.549393   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.549415   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.549597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.551555   46584 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:42.552918   46584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:42.554551   46584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.554573   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:42.554591   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.554552   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:42.554648   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:42.554662   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.561284   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.561706   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.561854   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.562023   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.562123   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.562179   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.562229   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.564058   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564432   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.564529   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564798   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.564977   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.565148   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.565282   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.570688   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0115 10:38:42.571242   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.571724   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.571749   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.571989   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.572135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.573685   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.573936   46584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.573952   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:42.573969   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.576948   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577272   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.577312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577680   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.577866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.577988   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.578134   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.687267   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:42.687293   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:42.707058   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:42.707083   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:42.727026   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.745278   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.777425   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:42.777450   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:42.779528   46584 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:42.832109   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
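Addon installation in this phase is simply an scp of each manifest into /etc/kubernetes/addons followed by kubectl apply with the kubeconfig baked into the VM. A sketch of the equivalent invocation from Go, reusing the exact command and paths shown in the log (running it outside the guest is an assumption; minikube runs it over SSH via ssh_runner):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner command in the log: sudo with KUBECONFIG set
	// inline, the versioned kubectl binary, and the copied addon manifests.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}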
	I0115 10:38:43.011971   46584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-781270" context rescaled to 1 replicas
	I0115 10:38:43.012022   46584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:43.014704   46584 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:43.016005   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:44.039814   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.294486297s)
	I0115 10:38:44.039891   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312831152s)
	I0115 10:38:44.039895   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039928   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039946   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040024   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040264   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040283   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040293   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040302   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040412   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040427   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040451   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040613   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040744   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040750   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040755   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040791   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040800   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054113   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.054134   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.054409   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.054454   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054469   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.151470   46584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.135429651s)
	I0115 10:38:44.151517   46584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:44.151560   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319411531s)
	I0115 10:38:44.151601   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.151626   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.151954   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.151973   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152001   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.152012   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.152312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.152317   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.152328   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152338   46584 addons.go:470] Verifying addon metrics-server=true in "embed-certs-781270"
	I0115 10:38:44.155687   46584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:41.191855   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:41.192271   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:41.192310   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:41.192239   47808 retry.go:31] will retry after 2.364591434s: waiting for machine to come up
	I0115 10:38:43.560150   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:43.560624   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:43.560648   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:43.560581   47808 retry.go:31] will retry after 3.204170036s: waiting for machine to come up
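The "will retry after 2.364591434s / 3.204170036s" lines come from minikube's retry helper, which re-runs the DHCP-lease probe with growing, jittered delays until the no-preload VM reports an IP. A generic sketch of that pattern (the doubling and jitter factors are assumptions; minikube's retry.go may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a growing, jittered delay between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	probe := func() error {
		// Placeholder for "look up the domain's DHCP lease and return its IP".
		return errors.New("unable to find current IP address")
	}
	_ = retryWithBackoff(5, 2*time.Second, probe)
}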
	I0115 10:38:40.076788   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.076875   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.089217   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:40.577351   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.577448   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.593294   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.076625   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.076730   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.092700   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.577413   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.577513   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.592266   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.076755   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.076862   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.090411   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.576920   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.576982   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.589590   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.077312   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.077410   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.089732   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.576781   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.576857   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.592414   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.076854   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.076922   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.089009   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.576614   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.576713   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.592137   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.157450   46584 addons.go:505] enable addons completed in 1.654202196s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:38:46.156830   46584 node_ready.go:58] node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:46.129496   46387 retry.go:31] will retry after 7.881779007s: kubelet not initialised
	I0115 10:38:46.766674   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:46.767050   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:46.767072   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:46.767013   47808 retry.go:31] will retry after 3.09324278s: waiting for machine to come up
	I0115 10:38:45.076819   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.076882   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.092624   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:45.576654   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.576724   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.590306   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.076821   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.076920   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.090883   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.577506   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.577590   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.590379   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.076909   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.076997   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.088742   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.577287   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.577371   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.589014   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.076538   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.076608   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.088956   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.576474   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.576573   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.588122   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.588146   47063 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:48.588153   47063 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:48.588162   47063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:48.588214   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:48.631367   47063 cri.go:89] found id: ""
	I0115 10:38:48.631441   47063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:48.648653   47063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:48.657948   47063 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:48.658017   47063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668103   47063 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668124   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:48.787890   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.559039   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.767002   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.842165   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:47.155176   46584 node_ready.go:49] node "embed-certs-781270" has status "Ready":"True"
	I0115 10:38:47.155200   46584 node_ready.go:38] duration metric: took 3.003671558s waiting for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:47.155212   46584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:47.162248   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:49.169885   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:51.190513   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:49.864075   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864515   46388 main.go:141] libmachine: (no-preload-824502) Found IP for machine: 192.168.50.136
	I0115 10:38:49.864538   46388 main.go:141] libmachine: (no-preload-824502) Reserving static IP address...
	I0115 10:38:49.864554   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has current primary IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864990   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.865034   46388 main.go:141] libmachine: (no-preload-824502) DBG | skip adding static IP to network mk-no-preload-824502 - found existing host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"}
	I0115 10:38:49.865052   46388 main.go:141] libmachine: (no-preload-824502) Reserved static IP address: 192.168.50.136
	I0115 10:38:49.865073   46388 main.go:141] libmachine: (no-preload-824502) Waiting for SSH to be available...
	I0115 10:38:49.865115   46388 main.go:141] libmachine: (no-preload-824502) DBG | Getting to WaitForSSH function...
	I0115 10:38:49.867410   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867671   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.867708   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867864   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH client type: external
	I0115 10:38:49.867924   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa (-rw-------)
	I0115 10:38:49.867961   46388 main.go:141] libmachine: (no-preload-824502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:49.867983   46388 main.go:141] libmachine: (no-preload-824502) DBG | About to run SSH command:
	I0115 10:38:49.867994   46388 main.go:141] libmachine: (no-preload-824502) DBG | exit 0
	I0115 10:38:49.966638   46388 main.go:141] libmachine: (no-preload-824502) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:49.967072   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetConfigRaw
	I0115 10:38:49.967925   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:49.970409   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.970811   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.970846   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.971099   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:38:49.971300   46388 machine.go:88] provisioning docker machine ...
	I0115 10:38:49.971327   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:49.971561   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971757   46388 buildroot.go:166] provisioning hostname "no-preload-824502"
	I0115 10:38:49.971783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971970   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:49.974279   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974723   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.974758   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974917   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:49.975088   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975247   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975460   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:49.975640   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:49.976081   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:49.976099   46388 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-824502 && echo "no-preload-824502" | sudo tee /etc/hostname
	I0115 10:38:50.121181   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-824502
	
	I0115 10:38:50.121206   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.123821   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124162   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.124194   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124371   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.124588   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124788   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124940   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.125103   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.125410   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.125429   46388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-824502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-824502/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-824502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:50.259649   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:50.259680   46388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:50.259710   46388 buildroot.go:174] setting up certificates
	I0115 10:38:50.259724   46388 provision.go:83] configureAuth start
	I0115 10:38:50.259736   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:50.260022   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:50.262296   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262683   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.262704   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262848   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.265340   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265715   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.265743   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265885   46388 provision.go:138] copyHostCerts
	I0115 10:38:50.265942   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:50.265953   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:50.266025   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:50.266128   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:50.266143   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:50.266178   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:50.266258   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:50.266268   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:50.266296   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:50.266359   46388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.no-preload-824502 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube no-preload-824502]
	I0115 10:38:50.666513   46388 provision.go:172] copyRemoteCerts
	I0115 10:38:50.666584   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:50.666615   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.669658   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670109   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.670162   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670410   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.670632   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.670812   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.671067   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:50.774944   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:50.799533   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:50.824210   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:38:50.849191   46388 provision.go:86] duration metric: configureAuth took 589.452836ms
	I0115 10:38:50.849224   46388 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:50.849455   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:38:50.849560   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.852884   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853291   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.853346   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853508   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.853746   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.853936   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.854105   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.854244   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.854708   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.854735   46388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:51.246971   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:51.246997   46388 machine.go:91] provisioned docker machine in 1.275679147s
	I0115 10:38:51.247026   46388 start.go:300] post-start starting for "no-preload-824502" (driver="kvm2")
	I0115 10:38:51.247043   46388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:51.247063   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.247450   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:51.247481   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.250477   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250751   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.250783   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250951   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.251085   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.251227   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.251308   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.350552   46388 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:51.355893   46388 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:51.355918   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:51.355994   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:51.356096   46388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:51.356220   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:51.366598   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:51.393765   46388 start.go:303] post-start completed in 146.702407ms
	I0115 10:38:51.393798   46388 fix.go:56] fixHost completed within 20.616533939s
	I0115 10:38:51.393826   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.396990   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397531   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.397563   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397785   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.398006   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398190   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398367   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.398602   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:51.399038   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:51.399057   46388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:51.532940   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315131.477577825
	
	I0115 10:38:51.532962   46388 fix.go:206] guest clock: 1705315131.477577825
	I0115 10:38:51.532971   46388 fix.go:219] Guest: 2024-01-15 10:38:51.477577825 +0000 UTC Remote: 2024-01-15 10:38:51.393803771 +0000 UTC m=+361.851018624 (delta=83.774054ms)
	I0115 10:38:51.533006   46388 fix.go:190] guest clock delta is within tolerance: 83.774054ms
	I0115 10:38:51.533011   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 20.755785276s
	I0115 10:38:51.533031   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.533296   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:51.536532   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537167   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.537206   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537411   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538058   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538236   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538395   46388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:51.538461   46388 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:51.538485   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.538492   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.541387   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541419   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541791   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541836   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541878   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541952   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.541959   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.542137   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542219   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.542317   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542396   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542477   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.542535   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542697   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.668594   46388 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:51.675328   46388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:51.822660   46388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:51.830242   46388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:51.830318   46388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:51.846032   46388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:51.846067   46388 start.go:475] detecting cgroup driver to use...
	I0115 10:38:51.846147   46388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:51.863608   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:51.875742   46388 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:51.875810   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:51.888307   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:51.902327   46388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:52.027186   46388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:52.170290   46388 docker.go:233] disabling docker service ...
	I0115 10:38:52.170372   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:52.184106   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:52.195719   46388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:52.304630   46388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:52.420312   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:52.434213   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:52.453883   46388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:52.453946   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.464662   46388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:52.464726   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.474291   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.483951   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.493132   46388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:52.503668   46388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:52.512336   46388 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:52.512410   46388 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:52.529602   46388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:52.541735   46388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:52.664696   46388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:52.844980   46388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:52.845051   46388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:52.850380   46388 start.go:543] Will wait 60s for crictl version
	I0115 10:38:52.850463   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:52.854500   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:52.890488   46388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:52.890595   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:52.944999   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:53.005494   46388 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:38:54.017897   46387 retry.go:31] will retry after 11.956919729s: kubelet not initialised
	I0115 10:38:53.006783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:53.009509   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.009903   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:53.009934   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.010135   46388 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:53.014612   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:53.029014   46388 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:38:53.029063   46388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:53.073803   46388 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:38:53.073839   46388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:38:53.073906   46388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.073943   46388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.073979   46388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.073945   46388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.073914   46388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.073932   46388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.073931   46388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.073918   46388 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075357   46388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.075478   46388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.075515   46388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.075532   46388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.075482   46388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.075483   46388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.234170   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.248000   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0115 10:38:53.264387   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.289576   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.303961   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.326078   46388 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0115 10:38:53.326132   46388 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.326176   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.331268   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.334628   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.366099   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.426012   46388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0115 10:38:53.426058   46388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.426106   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.426316   46388 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0115 10:38:53.426346   46388 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.426377   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505102   46388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0115 10:38:53.505194   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.505201   46388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.505286   46388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0115 10:38:53.505358   46388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.505410   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505334   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.507596   46388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0115 10:38:53.507630   46388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.507674   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.544052   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.544142   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.544078   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.544263   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.544458   46388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0115 10:38:53.544505   46388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.544550   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.568682   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0115 10:38:53.568786   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.568808   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.681576   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681671   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0115 10:38:53.681777   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:53.681779   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681918   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.681990   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.682040   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0115 10:38:53.682108   46388 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681996   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.682157   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681927   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 10:38:53.682277   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:53.728102   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:53.728204   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:49.944443   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:49.944529   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.445085   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.945395   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.444784   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.944622   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.444886   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.460961   47063 api_server.go:72] duration metric: took 2.516519088s to wait for apiserver process to appear ...
	I0115 10:38:52.460980   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:52.460996   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:52.461498   47063 api_server.go:269] stopped: https://192.168.39.125:8444/healthz: Get "https://192.168.39.125:8444/healthz": dial tcp 192.168.39.125:8444: connect: connection refused
	I0115 10:38:52.961968   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:53.672555   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:55.685156   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:56.172493   46584 pod_ready.go:92] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.172521   46584 pod_ready.go:81] duration metric: took 9.010249071s waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.172534   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.178057   46584 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178080   46584 pod_ready.go:81] duration metric: took 5.538491ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:56.178092   46584 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178100   46584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185048   46584 pod_ready.go:92] pod "etcd-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.185071   46584 pod_ready.go:81] duration metric: took 6.962528ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185082   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190244   46584 pod_ready.go:92] pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.190263   46584 pod_ready.go:81] duration metric: took 5.173778ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190275   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196537   46584 pod_ready.go:92] pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.196555   46584 pod_ready.go:81] duration metric: took 6.272551ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196566   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367735   46584 pod_ready.go:92] pod "kube-proxy-jqgfc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.367766   46584 pod_ready.go:81] duration metric: took 171.191874ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367779   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.209201   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.209232   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.209247   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.283870   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.283914   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.461166   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.476935   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.476968   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:56.961147   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.966917   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.966949   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:57.461505   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:57.467290   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:38:57.482673   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:57.482709   47063 api_server.go:131] duration metric: took 5.021721974s to wait for apiserver health ...
	I0115 10:38:57.482721   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:57.482729   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:57.484809   47063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:57.486522   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:57.503036   47063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:57.523094   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:57.539289   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:57.539332   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:57.539342   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:57.539353   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:57.539361   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:57.539367   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:57.539372   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:57.539378   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:57.539392   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:57.539400   47063 system_pods.go:74] duration metric: took 16.288236ms to wait for pod list to return data ...
	I0115 10:38:57.539415   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:57.547016   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:57.547043   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:57.547053   47063 node_conditions.go:105] duration metric: took 7.632954ms to run NodePressure ...
	I0115 10:38:57.547069   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:57.838097   47063 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847919   47063 kubeadm.go:787] kubelet initialised
	I0115 10:38:57.847945   47063 kubeadm.go:788] duration metric: took 9.818012ms waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847960   47063 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:57.860753   47063 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.866623   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866666   47063 pod_ready.go:81] duration metric: took 5.881593ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.866679   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866687   47063 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.873742   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873772   47063 pod_ready.go:81] duration metric: took 7.070689ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.873787   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873795   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.881283   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881313   47063 pod_ready.go:81] duration metric: took 7.502343ms waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.881328   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881335   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.927473   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927504   47063 pod_ready.go:81] duration metric: took 46.159848ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.927516   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927523   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.329002   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329029   47063 pod_ready.go:81] duration metric: took 401.499694ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.329039   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329046   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.727362   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727394   47063 pod_ready.go:81] duration metric: took 398.336577ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.727411   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727420   47063 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:59.138162   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138195   47063 pod_ready.go:81] duration metric: took 410.766568ms waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:59.138207   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138214   47063 pod_ready.go:38] duration metric: took 1.290244752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:59.138232   47063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:59.173438   47063 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:59.173463   47063 kubeadm.go:640] restartCluster took 20.622435902s
	I0115 10:38:59.173473   47063 kubeadm.go:406] StartCluster complete in 20.676611158s
	I0115 10:38:59.173494   47063 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.173598   47063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:59.176160   47063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.176389   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:59.176558   47063 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:59.176645   47063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176652   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:59.176680   47063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.176696   47063 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:59.176706   47063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176725   47063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-709012"
	I0115 10:38:59.176768   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177130   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177163   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177188   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177220   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177254   47063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.177288   47063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.177305   47063 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:59.177390   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177796   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177849   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.182815   47063 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-709012" context rescaled to 1 replicas
	I0115 10:38:59.182849   47063 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:59.184762   47063 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:59.186249   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:59.196870   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0115 10:38:59.197111   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0115 10:38:59.197493   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.197595   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.198074   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198096   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198236   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198264   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198410   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.198620   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.198634   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.199252   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.199278   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.202438   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0115 10:38:59.202957   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.203462   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.203489   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.203829   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.204271   47063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.204295   47063 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:59.204322   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.204406   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204434   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.204728   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204768   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.220973   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0115 10:38:59.221383   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.221873   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.221898   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.222330   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.222537   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.223337   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0115 10:38:59.223746   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0115 10:38:59.224454   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.224557   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.227071   47063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:59.225090   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.225234   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.228609   47063 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.228624   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:59.228638   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.228668   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229046   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.229064   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229415   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229515   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229671   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.230070   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.230093   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.232470   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.233532   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.235985   47063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:56.206357   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.524032218s)
	I0115 10:38:56.206399   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0115 10:38:56.206444   46388 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: (2.52429359s)
	I0115 10:38:56.206494   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206580   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.524566038s)
	I0115 10:38:56.206594   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206609   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206684   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.52488513s)
	I0115 10:38:56.206806   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0115 10:38:56.206718   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.524535788s)
	I0115 10:38:56.206824   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0115 10:38:56.206756   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.524930105s)
	I0115 10:38:56.206843   46388 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.206863   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206780   46388 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.478563083s)
	I0115 10:38:56.206890   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206908   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.986404   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0115 10:38:56.986480   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0115 10:38:56.986513   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:56.986555   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:59.063376   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.076785591s)
	I0115 10:38:59.063421   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0115 10:38:59.063449   46388 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.063494   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.234530   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.234543   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.237273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.237334   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:59.237349   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:59.237367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.237458   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.237624   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.237776   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.240471   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242356   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.242442   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.242483   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242538   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.245246   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.245394   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.251844   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0115 10:38:59.252344   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.252855   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.252876   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.253245   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.253439   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.255055   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.255299   47063 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.255315   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:59.255331   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.258732   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259370   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.259408   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259554   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.259739   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.259915   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.260060   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.380593   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:59.380623   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:59.387602   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.409765   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.434624   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:59.434655   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:59.514744   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:59.514778   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:59.528401   47063 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:59.528428   47063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:38:59.552331   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:00.775084   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.365286728s)
	I0115 10:39:00.775119   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387483878s)
	I0115 10:39:00.775251   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775268   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775195   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775319   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775697   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775737   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775778   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.775791   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.775805   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775818   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.776009   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.776030   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778922   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.778939   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778949   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.778959   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.779172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.780377   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.780395   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.787873   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.787893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.788142   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.788161   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.882725   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330338587s)
	I0115 10:39:00.882775   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.882792   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883118   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883140   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883144   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.883150   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.883166   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883494   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883513   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883523   47063 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-709012"
	I0115 10:39:00.887782   47063 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:56.767524   46584 pod_ready.go:92] pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.767555   46584 pod_ready.go:81] duration metric: took 399.766724ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.767569   46584 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.776515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:00.777313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:03.358192   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.294671295s)
	I0115 10:39:03.358221   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0115 10:39:03.358249   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:03.358296   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:00.889422   47063 addons.go:505] enable addons completed in 1.71286662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:01.533332   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.534081   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.274613   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.277132   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.981700   46387 kubeadm.go:787] kubelet initialised
	I0115 10:39:05.981726   46387 kubeadm.go:788] duration metric: took 49.462651853s waiting for restarted kubelet to initialise ...
	I0115 10:39:05.981737   46387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:05.987142   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993872   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.993896   46387 pod_ready.go:81] duration metric: took 6.725677ms waiting for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993920   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999103   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.999133   46387 pod_ready.go:81] duration metric: took 5.204706ms waiting for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999148   46387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004449   46387 pod_ready.go:92] pod "etcd-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.004472   46387 pod_ready.go:81] duration metric: took 5.315188ms waiting for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004484   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010187   46387 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.010209   46387 pod_ready.go:81] duration metric: took 5.716918ms waiting for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010221   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380715   46387 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.380742   46387 pod_ready.go:81] duration metric: took 370.513306ms waiting for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380756   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780865   46387 pod_ready.go:92] pod "kube-proxy-w9fdn" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.780887   46387 pod_ready.go:81] duration metric: took 400.122851ms waiting for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780899   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179755   46387 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.179785   46387 pod_ready.go:81] duration metric: took 398.879027ms waiting for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179798   46387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.188315   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.429866   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.071542398s)
	I0115 10:39:05.429896   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0115 10:39:05.429927   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:05.429988   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:08.115120   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.685106851s)
	I0115 10:39:08.115147   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0115 10:39:08.115179   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:08.115226   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:05.540836   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:07.032884   47063 node_ready.go:49] node "default-k8s-diff-port-709012" has status "Ready":"True"
	I0115 10:39:07.032914   47063 node_ready.go:38] duration metric: took 7.504464113s waiting for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:39:07.032928   47063 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:07.042672   47063 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048131   47063 pod_ready.go:92] pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.048156   47063 pod_ready.go:81] duration metric: took 5.456337ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048167   47063 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053470   47063 pod_ready.go:92] pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.053492   47063 pod_ready.go:81] duration metric: took 5.316882ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053504   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.061828   47063 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.562201   47063 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.562235   47063 pod_ready.go:81] duration metric: took 2.508719163s waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.562248   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571588   47063 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.571614   47063 pod_ready.go:81] duration metric: took 9.356396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571628   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580269   47063 pod_ready.go:92] pod "kube-proxy-d8lcq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.580291   47063 pod_ready.go:81] duration metric: took 8.654269ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580305   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833621   47063 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.833646   47063 pod_ready.go:81] duration metric: took 253.332081ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833658   47063 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.776707   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.777515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.687740   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.187565   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.092236   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.976986955s)
	I0115 10:39:11.092266   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0115 10:39:11.092290   46388 cache_images.go:123] Successfully loaded all cached images
	I0115 10:39:11.092296   46388 cache_images.go:92] LoadImages completed in 18.018443053s
	I0115 10:39:11.092373   46388 ssh_runner.go:195] Run: crio config
	I0115 10:39:11.155014   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:11.155036   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:11.155056   46388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:39:11.155074   46388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-824502 NodeName:no-preload-824502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:39:11.155203   46388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-824502"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:39:11.155265   46388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-824502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:39:11.155316   46388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:39:11.165512   46388 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:39:11.165586   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:39:11.175288   46388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0115 10:39:11.192730   46388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:39:11.209483   46388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
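The rendered kubeadm config shown above is four stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", staged as /var/tmp/minikube/kubeadm.yaml.new before being copied into place. As a minimal illustrative sketch (not part of minikube), a small standard-library Go program can split such a file and report the kind of each document; the file path is the one from the log above:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path taken from the log above; adjust if the config lives elsewhere.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Stacked YAML documents are separated by lines containing only "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
				}
			}
		}
	}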
	I0115 10:39:11.228296   46388 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0115 10:39:11.232471   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:39:11.245041   46388 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502 for IP: 192.168.50.136
	I0115 10:39:11.245106   46388 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:11.245298   46388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:39:11.245364   46388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:39:11.245456   46388 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.key
	I0115 10:39:11.245551   46388 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key.cb5546de
	I0115 10:39:11.245617   46388 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key
	I0115 10:39:11.245769   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:39:11.245808   46388 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:39:11.245823   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:39:11.245855   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:39:11.245895   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:39:11.245937   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:39:11.246018   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:39:11.246987   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:39:11.272058   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:39:11.295425   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:39:11.320271   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:39:11.347161   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:39:11.372529   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:39:11.396765   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:39:11.419507   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:39:11.441814   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:39:11.463306   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:39:11.485830   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:39:11.510306   46388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:39:11.527095   46388 ssh_runner.go:195] Run: openssl version
	I0115 10:39:11.532483   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:39:11.543447   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548266   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548330   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.554228   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:39:11.564891   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:39:11.574809   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579217   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579257   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.584745   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:39:11.596117   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:39:11.606888   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611567   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611632   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.617307   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:39:11.627893   46388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:39:11.632530   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:39:11.638562   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:39:11.644605   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:39:11.650917   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:39:11.656970   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:39:11.662948   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
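The "openssl x509 -checkend 86400" runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A rough stand-alone equivalent with the Go standard library is sketched below; the certificate path is just one of the files named in the log, and the 24-hour threshold mirrors the 86400 seconds used above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// In the spirit of: openssl x509 -noout -in <cert> -checkend 86400
		path := "/var/lib/minikube/certs/apiserver.crt" // illustrative path from the log above
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found in", path)
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h, expires:", cert.NotAfter)
	}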
	I0115 10:39:11.669010   46388 kubeadm.go:404] StartCluster: {Name:no-preload-824502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:39:11.669093   46388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:39:11.669144   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:11.707521   46388 cri.go:89] found id: ""
	I0115 10:39:11.707594   46388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:39:11.719407   46388 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:39:11.719445   46388 kubeadm.go:636] restartCluster start
	I0115 10:39:11.719511   46388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:39:11.729609   46388 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.730839   46388 kubeconfig.go:92] found "no-preload-824502" server: "https://192.168.50.136:8443"
	I0115 10:39:11.733782   46388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:39:11.744363   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:11.744437   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:11.757697   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.245289   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.245389   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.258680   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.745234   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.745334   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.757934   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.244459   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.244549   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.256860   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.745400   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.745486   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.759185   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:14.244696   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.244774   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.257692   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.842044   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.339850   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.779637   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.278260   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.187668   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.187834   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.745104   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.745191   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.757723   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.244680   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.244760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.259042   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.744599   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.744692   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.761497   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.245412   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.245507   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.260040   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.744664   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.744752   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.757209   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.244739   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.244826   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.257922   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.744411   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.744528   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.756304   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.244475   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.244580   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.257372   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.744977   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.745072   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.758201   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:19.244832   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.244906   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.257468   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.342438   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.845282   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.776399   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.276057   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:20.686392   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:22.687613   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.745014   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.745076   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.757274   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.245246   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.245307   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.257735   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.745333   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.745422   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.757945   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.245022   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.245112   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.257351   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.744980   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.745057   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.756073   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.756099   46388 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:39:21.756107   46388 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:39:21.756116   46388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:39:21.756167   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:21.800172   46388 cri.go:89] found id: ""
	I0115 10:39:21.800229   46388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:39:21.815607   46388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:39:21.826460   46388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:39:21.826525   46388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835735   46388 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835758   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:21.963603   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.673572   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.882139   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.975846   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
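The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config instead of performing a full "kubeadm init". Purely as an illustrative sketch (not minikube's implementation), the same phase sequence could be driven from Go via os/exec; the binary and config paths are copied from the log above and error handling is kept minimal:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm" // path as used in the log
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", cfg)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			fmt.Println("running:", kubeadm, args)
			if err := cmd.Run(); err != nil {
				fmt.Fprintln(os.Stderr, "phase failed:", err)
				os.Exit(1)
			}
		}
	}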
	I0115 10:39:23.061284   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:39:23.061391   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:23.561760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.061736   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.562127   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:21.340520   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.340897   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:21.776123   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.776196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.777003   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:24.688163   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.187371   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.061818   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.561582   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.584837   46388 api_server.go:72] duration metric: took 2.523550669s to wait for apiserver process to appear ...
	I0115 10:39:25.584868   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:39:25.584893   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.585385   46388 api_server.go:269] stopped: https://192.168.50.136:8443/healthz: Get "https://192.168.50.136:8443/healthz": dial tcp 192.168.50.136:8443: connect: connection refused
	I0115 10:39:26.085248   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.546970   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.547007   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.547026   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.597433   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.597466   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.597482   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.342652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.343320   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.840652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.625537   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:29.625587   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.085614   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.091715   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.091745   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.585298   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.591889   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.591919   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:31.086028   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:31.091297   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:39:31.099702   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:39:31.099726   46388 api_server.go:131] duration metric: took 5.514851771s to wait for apiserver health ...
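The /healthz polling above progresses from 403 (the anonymous probe is rejected before RBAC bootstrap roles exist), through 500 with per-poststarthook results, to 200 once the apiserver is healthy. A rough sketch of such a poll is shown below, assuming anonymous access like the probe in the log; InsecureSkipVerify is used only because this sketch does not load the minikubeCA certificate, and the endpoint is the one from the log above:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver certificate is signed by minikubeCA, which this
				// sketch does not load; skip verification for brevity only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.50.136:8443/healthz" // endpoint taken from the log above
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}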
	I0115 10:39:31.099735   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:31.099741   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:31.102193   46388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:39:28.275539   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:30.276634   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.104002   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:39:31.130562   46388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:39:31.163222   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:39:31.186170   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:39:31.186201   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:39:31.186212   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:39:31.186222   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:39:31.186231   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:39:31.186242   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:39:31.186252   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:39:31.186263   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:39:31.186276   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:39:31.186284   46388 system_pods.go:74] duration metric: took 23.040188ms to wait for pod list to return data ...
	I0115 10:39:31.186292   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:39:31.215529   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:39:31.215567   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:39:31.215584   46388 node_conditions.go:105] duration metric: took 29.283674ms to run NodePressure ...
	I0115 10:39:31.215615   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:31.584238   46388 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590655   46388 kubeadm.go:787] kubelet initialised
	I0115 10:39:31.590679   46388 kubeadm.go:788] duration metric: took 6.418412ms waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590688   46388 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:31.603892   46388 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.612449   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612484   46388 pod_ready.go:81] duration metric: took 8.567896ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.612497   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612507   46388 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.622651   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622678   46388 pod_ready.go:81] duration metric: took 10.161967ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.622690   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622698   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.633893   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633917   46388 pod_ready.go:81] duration metric: took 11.210807ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.633929   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633937   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.639395   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639423   46388 pod_ready.go:81] duration metric: took 5.474128ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.639434   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639442   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.989202   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989242   46388 pod_ready.go:81] duration metric: took 349.786667ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.989255   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989264   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.387200   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387227   46388 pod_ready.go:81] duration metric: took 397.955919ms waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.387236   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387243   46388 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.789213   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789235   46388 pod_ready.go:81] duration metric: took 401.985079ms waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.789245   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789252   46388 pod_ready.go:38] duration metric: took 1.198551697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
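The extra wait above checks each system-critical pod for the Ready condition and skips pods whose node is not yet Ready. A rough, hypothetical equivalent for a single pod is sketched below, shelling out to kubectl's JSONPath output from Go so all sketches in this report stay in one language; the pod name and namespace are taken from the log above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Poll the Ready condition of one system pod via kubectl's JSONPath output.
		pod, ns := "coredns-76f75df574-ft2wt", "kube-system" // names taken from the log above
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		for i := 0; i < 60; i++ {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
				"-o", "jsonpath="+jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}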
	I0115 10:39:32.789271   46388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:39:32.802883   46388 ops.go:34] apiserver oom_adj: -16
	I0115 10:39:32.802901   46388 kubeadm.go:640] restartCluster took 21.083448836s
	I0115 10:39:32.802908   46388 kubeadm.go:406] StartCluster complete in 21.133905255s
	I0115 10:39:32.802921   46388 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.802997   46388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:39:32.804628   46388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.804880   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:39:32.804990   46388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:39:32.805075   46388 addons.go:69] Setting storage-provisioner=true in profile "no-preload-824502"
	I0115 10:39:32.805094   46388 addons.go:234] Setting addon storage-provisioner=true in "no-preload-824502"
	W0115 10:39:32.805102   46388 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:39:32.805108   46388 addons.go:69] Setting default-storageclass=true in profile "no-preload-824502"
	I0115 10:39:32.805128   46388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-824502"
	I0115 10:39:32.805128   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:39:32.805137   46388 addons.go:69] Setting metrics-server=true in profile "no-preload-824502"
	I0115 10:39:32.805165   46388 addons.go:234] Setting addon metrics-server=true in "no-preload-824502"
	I0115 10:39:32.805172   46388 host.go:66] Checking if "no-preload-824502" exists ...
	W0115 10:39:32.805175   46388 addons.go:243] addon metrics-server should already be in state true
	I0115 10:39:32.805219   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.805564   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805565   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805597   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805602   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805616   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805698   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.809596   46388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-824502" context rescaled to 1 replicas
	I0115 10:39:32.809632   46388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:39:32.812135   46388 out.go:177] * Verifying Kubernetes components...
	I0115 10:39:32.814392   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:39:32.823244   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0115 10:39:32.823758   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.823864   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0115 10:39:32.824287   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824306   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.824351   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.824693   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.824816   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.824833   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824857   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.825184   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.825778   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.825823   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.827847   46388 addons.go:234] Setting addon default-storageclass=true in "no-preload-824502"
	W0115 10:39:32.827864   46388 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:39:32.827883   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.828242   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.828286   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.838537   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0115 10:39:32.839162   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.839727   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.839747   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.841293   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.841862   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.841899   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.844309   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0115 10:39:32.844407   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0115 10:39:32.844654   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.844941   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.845132   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845156   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.845712   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.845881   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845894   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.846316   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.846347   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.846910   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.847189   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.849126   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.851699   46388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:39:32.853268   46388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:32.853284   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:39:32.853305   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.855997   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856372   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.856394   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856569   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.856716   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.856853   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.856975   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.861396   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0115 10:39:32.861893   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.862379   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.862409   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.862874   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.863050   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.864195   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0115 10:39:32.864480   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.866714   46388 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:39:32.864849   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.868242   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:39:32.868256   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:39:32.868274   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.868596   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.868613   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.869057   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.869306   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.870918   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.871163   46388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:32.871177   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:39:32.871192   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.871252   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871670   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.871691   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871958   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.872127   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.872288   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.872463   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.874381   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875287   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.875314   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875478   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.875624   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.875786   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.875893   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.982357   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:33.059016   46388 node_ready.go:35] waiting up to 6m0s for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:33.059259   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:39:33.059281   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:39:33.060796   46388 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:39:33.060983   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:33.110608   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:39:33.110633   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:39:33.154857   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:33.154886   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:39:33.198495   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:34.178167   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117123302s)
	I0115 10:39:34.178220   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178234   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178312   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19592253s)
	I0115 10:39:34.178359   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178372   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178649   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178669   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178687   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178712   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178723   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178735   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178691   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178800   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178811   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178823   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178982   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179001   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.179003   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179040   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179057   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179075   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.186855   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.186875   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.187114   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.187135   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.187154   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.293778   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095231157s)
	I0115 10:39:34.293837   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.293861   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294161   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294184   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294194   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.294203   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294451   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294475   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294487   46388 addons.go:470] Verifying addon metrics-server=true in "no-preload-824502"
	I0115 10:39:34.296653   46388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:39:29.687541   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.689881   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.692248   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.298179   46388 addons.go:505] enable addons completed in 1.493195038s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:31.842086   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.843601   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:32.775651   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.778997   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:36.186700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.688932   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:35.063999   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:37.068802   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:39.564287   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:36.341901   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.344615   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:37.278252   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:39.780035   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:41.186854   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.687410   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:40.063481   46388 node_ready.go:49] node "no-preload-824502" has status "Ready":"True"
	I0115 10:39:40.063509   46388 node_ready.go:38] duration metric: took 7.00445832s waiting for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:40.063521   46388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:40.069733   46388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077511   46388 pod_ready.go:92] pod "coredns-76f75df574-ft2wt" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.077539   46388 pod_ready.go:81] duration metric: took 7.783253ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077549   46388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082665   46388 pod_ready.go:92] pod "etcd-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.082693   46388 pod_ready.go:81] duration metric: took 5.137636ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082704   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087534   46388 pod_ready.go:92] pod "kube-apiserver-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.087552   46388 pod_ready.go:81] duration metric: took 4.840583ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087563   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092447   46388 pod_ready.go:92] pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.092473   46388 pod_ready.go:81] duration metric: took 4.90114ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092493   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464047   46388 pod_ready.go:92] pod "kube-proxy-nlk2h" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.464065   46388 pod_ready.go:81] duration metric: took 371.565815ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464075   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:42.472255   46388 pod_ready.go:102] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.471011   46388 pod_ready.go:92] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:43.471033   46388 pod_ready.go:81] duration metric: took 3.006951578s waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:43.471045   46388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.841668   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.842151   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.277636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:44.787510   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:46.187891   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:48.687578   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.478255   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.978120   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.340455   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.341486   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.840829   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.275430   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.776946   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.188236   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.686748   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.980682   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:52.479488   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.840971   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.841513   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.778023   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.275602   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:55.687892   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.186665   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.978059   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.978213   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.978881   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.341772   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.841021   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.775700   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:59.274671   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:01.280895   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.186976   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:02.688712   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.978942   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.482480   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.841912   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.340823   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.775015   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.776664   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.185744   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.185877   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:09.187192   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.979141   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.479235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.840997   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.842100   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.278110   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.775278   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:11.686672   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.187037   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.978475   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.978621   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.346343   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.841357   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.841981   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:13.278313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:15.777340   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.188343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.687840   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.979177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.981550   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.478364   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:17.340973   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.341317   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.275525   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:20.277493   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.187342   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.693743   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.480386   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.481947   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.341650   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.841949   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:22.777674   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.273859   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:26.186846   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.188206   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.978266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.979824   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.842629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.341954   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.274109   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:29.275517   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:31.277396   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.688520   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.187343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.478712   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:32.978549   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.843559   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.340435   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.278639   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.777051   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.186611   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:34.978720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:37.488790   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.841994   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.340074   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.278319   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.776206   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:39.978911   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.478331   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.187741   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.687320   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.340766   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.341909   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.843116   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.777726   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.777953   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:45.188685   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.687270   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.978841   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.477932   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.478482   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.340237   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.341936   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.275247   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.777753   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.688548   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.187385   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.188261   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.478562   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.978677   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.840537   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.842188   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.278594   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.774847   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.687614   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:59.186203   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.479325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.979266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.340295   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.342857   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.776968   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.777421   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.278730   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.186645   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.187583   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.478127   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.478816   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:00.841474   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.340255   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.775648   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.779261   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.687557   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.688081   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.979671   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.478240   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.345230   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.841561   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:09.841629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.275641   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.276466   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.187771   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.688852   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.478832   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.978808   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:11.841717   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.341355   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.775133   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.274677   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.186001   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.186387   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.186931   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.979099   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.478539   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:16.841294   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:18.842244   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.776623   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:20.274196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.187095   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.689700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.978471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.478169   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.479319   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.341851   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.343663   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.275134   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.276420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.185307   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.186549   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.978977   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.979239   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:25.840539   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:27.840819   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:29.842580   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.775069   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.775244   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.275239   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:30.187482   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.687454   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.478330   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.479265   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.340974   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.342201   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.275561   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.775652   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.687487   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.689628   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:39.186244   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.979235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.981609   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.342452   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:38.841213   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.775893   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.274573   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.186313   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.687042   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.478993   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.479953   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.341359   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.840325   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.775636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.275821   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.687911   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.186598   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:44.977946   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:46.980471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.477591   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.841849   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.341443   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:47.276441   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.775182   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.687273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.187451   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.480325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.979440   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.841657   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.341257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.776199   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:54.274920   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.188121   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.191970   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.478903   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:58.979288   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.341479   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.841144   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.841215   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.775625   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.276127   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.687860   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:02.188506   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.480582   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:03.977715   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.841608   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.340546   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.775220   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.274050   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.277327   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.688269   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.187187   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:05.977760   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.978356   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.340629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.341333   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.775130   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.776410   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.686836   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.187035   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.187814   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.978478   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.477854   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.477883   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.341625   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.841300   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.842745   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:13.276029   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:15.774949   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.686998   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.689531   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.478177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.978154   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.844053   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:19.339915   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:17.775988   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:20.276213   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.187144   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.188273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.479275   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.977720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.342019   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.343747   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:22.775222   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.274922   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.186701   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.979093   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.478022   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.843596   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.340257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:27.275420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:29.275918   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:31.276702   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.186796   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.686406   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.478933   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.978757   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.341780   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.842117   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:33.774432   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.775822   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:34.687304   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:36.687850   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.187956   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.478261   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.978198   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.341314   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.840626   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.842475   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:38.275042   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:40.774892   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.686479   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.688800   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.980119   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:42.478070   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.478709   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.844661   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.340617   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.278574   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:45.775324   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.185760   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.186399   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.479381   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.979086   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.842369   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:49.341153   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:47.776338   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.275329   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.187219   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.687370   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.479573   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.978568   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.840818   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.842279   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.776812   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:54.780747   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.187111   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:57.187263   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.478479   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.977687   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.846775   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.340913   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.768584   46584 pod_ready.go:81] duration metric: took 4m0.001000825s waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:42:56.768615   46584 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:42:56.768623   46584 pod_ready.go:38] duration metric: took 4m9.613401399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:42:56.768641   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:42:56.768686   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:42:56.768739   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:42:56.842276   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:56.842298   46584 cri.go:89] found id: ""
	I0115 10:42:56.842309   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:42:56.842361   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.847060   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:42:56.847118   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:42:56.887059   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:56.887092   46584 cri.go:89] found id: ""
	I0115 10:42:56.887100   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:42:56.887158   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.893238   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:42:56.893289   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:42:56.933564   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:56.933593   46584 cri.go:89] found id: ""
	I0115 10:42:56.933603   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:42:56.933657   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.937882   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:42:56.937958   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:42:56.980953   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:56.980989   46584 cri.go:89] found id: ""
	I0115 10:42:56.980999   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:42:56.981051   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.985008   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:42:56.985058   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:42:57.026275   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:57.026305   46584 cri.go:89] found id: ""
	I0115 10:42:57.026315   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:42:57.026373   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.030799   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:42:57.030885   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:42:57.071391   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:42:57.071416   46584 cri.go:89] found id: ""
	I0115 10:42:57.071424   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:42:57.071485   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.076203   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:42:57.076254   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:42:57.119035   46584 cri.go:89] found id: ""
	I0115 10:42:57.119062   46584 logs.go:284] 0 containers: []
	W0115 10:42:57.119069   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:42:57.119074   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:42:57.119129   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:42:57.167335   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:57.167355   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:57.167360   46584 cri.go:89] found id: ""
	I0115 10:42:57.167367   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:42:57.167411   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.171919   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.176255   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:42:57.176284   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:42:57.328501   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:42:57.328538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:57.390279   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:42:57.390309   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:57.886607   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:42:57.886645   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:42:57.937391   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:42:57.937420   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:42:58.001313   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:42:58.001348   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:42:58.016772   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:42:58.016804   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:58.060489   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:42:58.060516   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:58.102993   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:42:58.103043   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:58.140732   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:42:58.140764   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:58.191891   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:42:58.191927   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:58.235836   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:42:58.235861   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:58.277424   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:42:58.277465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:00.844771   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:00.862922   46584 api_server.go:72] duration metric: took 4m17.850865s to wait for apiserver process to appear ...
	I0115 10:43:00.862946   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:00.862992   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:00.863055   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:00.909986   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:00.910013   46584 cri.go:89] found id: ""
	I0115 10:43:00.910020   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:00.910066   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.915553   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:00.915634   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:00.969923   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:00.969951   46584 cri.go:89] found id: ""
	I0115 10:43:00.969961   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:00.970021   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.974739   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:00.974805   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:01.024283   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.024305   46584 cri.go:89] found id: ""
	I0115 10:43:01.024314   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:01.024366   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.029325   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:01.029388   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:01.070719   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.070746   46584 cri.go:89] found id: ""
	I0115 10:43:01.070755   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:01.070806   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.074906   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:01.074969   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:01.111715   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.111747   46584 cri.go:89] found id: ""
	I0115 10:43:01.111756   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:01.111805   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.116173   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:01.116225   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:01.157760   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.157791   46584 cri.go:89] found id: ""
	I0115 10:43:01.157802   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:01.157866   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.161944   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:01.162010   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:01.201888   46584 cri.go:89] found id: ""
	I0115 10:43:01.201915   46584 logs.go:284] 0 containers: []
	W0115 10:43:01.201925   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:01.201932   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:01.201990   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:01.244319   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.244346   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.244352   46584 cri.go:89] found id: ""
	I0115 10:43:01.244361   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:01.244454   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.248831   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.253617   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:01.253643   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:01.309426   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:01.309465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.346755   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:01.346789   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.385238   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:01.385266   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.423907   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:01.423941   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.480867   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:01.480902   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:01.538367   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:01.538403   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.580240   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:01.580273   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.622561   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:01.622602   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:01.675436   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:01.675463   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:59.687714   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.186463   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.982902   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:03.478178   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.840619   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.841154   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:04.842905   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.080545   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:02.080578   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:02.144713   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:02.144756   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:02.160120   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:02.160147   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:04.776113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:43:04.782741   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:43:04.783959   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:04.783979   46584 api_server.go:131] duration metric: took 3.92102734s to wait for apiserver health ...
	I0115 10:43:04.783986   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:04.784019   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:04.784071   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:04.832660   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:04.832685   46584 cri.go:89] found id: ""
	I0115 10:43:04.832695   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:04.832750   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.836959   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:04.837009   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:04.878083   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:04.878103   46584 cri.go:89] found id: ""
	I0115 10:43:04.878110   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:04.878160   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.882581   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:04.882642   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:04.927778   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:04.927798   46584 cri.go:89] found id: ""
	I0115 10:43:04.927805   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:04.927848   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.932822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:04.932891   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:04.975930   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:04.975955   46584 cri.go:89] found id: ""
	I0115 10:43:04.975965   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:04.976010   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.980744   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:04.980803   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:05.024300   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.024325   46584 cri.go:89] found id: ""
	I0115 10:43:05.024332   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:05.024383   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.029091   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:05.029159   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:05.081239   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.081264   46584 cri.go:89] found id: ""
	I0115 10:43:05.081273   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:05.081332   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.085822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:05.085879   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:05.126839   46584 cri.go:89] found id: ""
	I0115 10:43:05.126884   46584 logs.go:284] 0 containers: []
	W0115 10:43:05.126896   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:05.126903   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:05.126963   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:05.168241   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.168269   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.168276   46584 cri.go:89] found id: ""
	I0115 10:43:05.168285   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:05.168343   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.173309   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.177144   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:05.177164   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:05.239116   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:05.239148   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:05.368712   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:05.368745   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:05.429504   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:05.429540   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:05.473181   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:05.473216   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.510948   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:05.510974   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.551052   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:05.551082   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.606711   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:05.606746   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:05.661634   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:05.661663   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:05.675627   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:05.675656   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:05.736266   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:05.736305   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.775567   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:05.775597   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:06.111495   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:06.111531   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:08.661238   46584 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:08.661275   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.661282   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.661288   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.661294   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.661300   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.661306   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.661316   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.661324   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.661335   46584 system_pods.go:74] duration metric: took 3.877343546s to wait for pod list to return data ...
	I0115 10:43:08.661342   46584 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:08.664367   46584 default_sa.go:45] found service account: "default"
	I0115 10:43:08.664393   46584 default_sa.go:55] duration metric: took 3.04125ms for default service account to be created ...
	I0115 10:43:08.664408   46584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:08.672827   46584 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:08.672852   46584 system_pods.go:89] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.672860   46584 system_pods.go:89] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.672867   46584 system_pods.go:89] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.672873   46584 system_pods.go:89] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.672879   46584 system_pods.go:89] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.672885   46584 system_pods.go:89] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.672895   46584 system_pods.go:89] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.672906   46584 system_pods.go:89] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.672920   46584 system_pods.go:126] duration metric: took 8.505614ms to wait for k8s-apps to be running ...
	I0115 10:43:08.672933   46584 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:08.672984   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:08.690592   46584 system_svc.go:56] duration metric: took 17.651896ms WaitForService to wait for kubelet.
	I0115 10:43:08.690618   46584 kubeadm.go:581] duration metric: took 4m25.678563679s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:08.690640   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:08.694652   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:08.694679   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:08.694692   46584 node_conditions.go:105] duration metric: took 4.045505ms to run NodePressure ...
	I0115 10:43:08.694705   46584 start.go:228] waiting for startup goroutines ...
	I0115 10:43:08.694713   46584 start.go:233] waiting for cluster config update ...
	I0115 10:43:08.694725   46584 start.go:242] writing updated cluster config ...
	I0115 10:43:08.694991   46584 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:08.747501   46584 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:08.750319   46584 out.go:177] * Done! kubectl is now configured to use "embed-certs-781270" cluster and "default" namespace by default
	I0115 10:43:04.686284   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:06.703127   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.180590   46387 pod_ready.go:81] duration metric: took 4m0.000776944s waiting for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:07.180624   46387 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0115 10:43:07.180644   46387 pod_ready.go:38] duration metric: took 4m1.198895448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:07.180669   46387 kubeadm.go:640] restartCluster took 5m11.875261334s
	W0115 10:43:07.180729   46387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0115 10:43:07.180765   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0115 10:43:05.479764   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.978536   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.343529   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841510   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841533   47063 pod_ready.go:81] duration metric: took 4m0.007868879s waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:09.841542   47063 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:09.841549   47063 pod_ready.go:38] duration metric: took 4m2.808610487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:09.841562   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:09.841584   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:09.841625   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:12.165729   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.984931075s)
	I0115 10:43:12.165790   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:12.178710   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:43:12.188911   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:43:12.199329   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:43:12.199377   46387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 10:43:12.411245   46387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 10:43:09.980448   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:12.478625   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:14.479234   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.904898   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:09.904921   47063 cri.go:89] found id: ""
	I0115 10:43:09.904930   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:09.904996   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.911493   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:09.911557   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:09.958040   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:09.958060   47063 cri.go:89] found id: ""
	I0115 10:43:09.958070   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:09.958122   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.962914   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:09.962972   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:10.033848   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:10.033875   47063 cri.go:89] found id: ""
	I0115 10:43:10.033885   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:10.033946   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.043173   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:10.043232   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:10.088380   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:10.088405   47063 cri.go:89] found id: ""
	I0115 10:43:10.088415   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:10.088478   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.094288   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:10.094350   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:10.145428   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:10.145453   47063 cri.go:89] found id: ""
	I0115 10:43:10.145463   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:10.145547   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.150557   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:10.150637   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:10.206875   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:10.206901   47063 cri.go:89] found id: ""
	I0115 10:43:10.206915   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:10.206971   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.211979   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:10.212039   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:10.260892   47063 cri.go:89] found id: ""
	I0115 10:43:10.260914   47063 logs.go:284] 0 containers: []
	W0115 10:43:10.260924   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:10.260936   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:10.260987   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:10.315938   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.315970   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:10.315978   47063 cri.go:89] found id: ""
	I0115 10:43:10.315987   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:10.316045   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.324077   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.332727   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:10.332756   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.376006   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:10.376034   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:10.967301   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:10.967337   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:11.033301   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:11.033327   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:11.091151   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:11.091184   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:11.145411   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:11.145447   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:11.194249   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:11.194274   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:11.373988   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:11.374020   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:11.442754   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:11.442788   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:11.486282   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:11.486315   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:11.547428   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:11.547464   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:11.560977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:11.561005   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:11.603150   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:11.603179   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
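The block above is minikube's diagnostic sweep: for each control-plane component it resolves the container ID with `crictl ps -a --quiet --name=...`, tails that container's log, and folds in kubelet, CRI-O, and dmesg output. The commands below are the same ones shown in the log, rolled into a loop for manual use on the node; the loop wrapper itself is illustrative and not part of the test run.

# Same per-component sweep as the log above, collected into a loop (loop is illustrative).
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
  for id in $(sudo crictl ps -a --quiet --name="$name"); do
    echo "=== $name $id ==="
    sudo /usr/bin/crictl logs --tail 400 "$id"
  done
done
# System-level sources gathered alongside the container logs:
sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400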
	I0115 10:43:14.149324   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:14.166360   47063 api_server.go:72] duration metric: took 4m14.983478755s to wait for apiserver process to appear ...
	I0115 10:43:14.166391   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:14.166444   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:14.166504   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:14.211924   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:14.211950   47063 cri.go:89] found id: ""
	I0115 10:43:14.211961   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:14.212018   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.216288   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:14.216352   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:14.264237   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:14.264270   47063 cri.go:89] found id: ""
	I0115 10:43:14.264280   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:14.264338   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.268883   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:14.268947   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:14.329606   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:14.329631   47063 cri.go:89] found id: ""
	I0115 10:43:14.329639   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:14.329694   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.334069   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:14.334133   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:14.374753   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.374779   47063 cri.go:89] found id: ""
	I0115 10:43:14.374788   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:14.374842   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.380452   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:14.380529   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:14.422341   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:14.422371   47063 cri.go:89] found id: ""
	I0115 10:43:14.422380   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:14.422444   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.427106   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:14.427169   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:14.469410   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:14.469440   47063 cri.go:89] found id: ""
	I0115 10:43:14.469450   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:14.469511   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.475098   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:14.475216   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:14.533771   47063 cri.go:89] found id: ""
	I0115 10:43:14.533794   47063 logs.go:284] 0 containers: []
	W0115 10:43:14.533800   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:14.533805   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:14.533876   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:14.573458   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:14.573483   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:14.573490   47063 cri.go:89] found id: ""
	I0115 10:43:14.573498   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:14.573561   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.578186   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.583133   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:14.583157   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.631142   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:14.631180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:16.978406   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:18.979879   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:15.076904   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:15.076958   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:15.129739   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:15.129778   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:15.169656   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:15.169685   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:15.229569   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:15.229616   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:15.293037   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:15.293075   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:15.351198   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:15.351243   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:15.394604   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:15.394642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:15.451142   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:15.451180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:15.466108   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:15.466146   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:15.595576   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:15.595615   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:15.643711   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:15.643740   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.200861   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:43:18.207576   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:43:18.208943   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:18.208964   47063 api_server.go:131] duration metric: took 4.042566476s to wait for apiserver health ...
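The healthz wait above polls https://192.168.39.125:8444/healthz until it answers 200 with the body "ok". A manual probe of the same endpoint looks roughly like the sketch below; the certificate paths are the usual minikube defaults and are assumptions, since the log does not show them.

# Manual probe of the same endpoint (certificate paths are assumed minikube defaults, not from this log).
curl --cacert ~/.minikube/ca.crt \
     --cert ~/.minikube/profiles/default-k8s-diff-port-709012/client.crt \
     --key  ~/.minikube/profiles/default-k8s-diff-port-709012/client.key \
     https://192.168.39.125:8444/healthz
# Expected once the control plane is healthy: ok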
	I0115 10:43:18.208971   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:18.208992   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:18.209037   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:18.254324   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.254353   47063 cri.go:89] found id: ""
	I0115 10:43:18.254361   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:18.254405   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.258765   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:18.258844   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:18.303785   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.303811   47063 cri.go:89] found id: ""
	I0115 10:43:18.303820   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:18.303880   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.308940   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:18.309009   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:18.358850   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:18.358878   47063 cri.go:89] found id: ""
	I0115 10:43:18.358888   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:18.358954   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.363588   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:18.363656   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:18.412797   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.412820   47063 cri.go:89] found id: ""
	I0115 10:43:18.412828   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:18.412878   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.418704   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:18.418765   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:18.460050   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:18.460074   47063 cri.go:89] found id: ""
	I0115 10:43:18.460083   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:18.460138   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.465581   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:18.465642   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:18.516632   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.516656   47063 cri.go:89] found id: ""
	I0115 10:43:18.516665   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:18.516719   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.521873   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:18.521935   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:18.574117   47063 cri.go:89] found id: ""
	I0115 10:43:18.574145   47063 logs.go:284] 0 containers: []
	W0115 10:43:18.574154   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:18.574161   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:18.574222   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:18.630561   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.630593   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:18.630599   47063 cri.go:89] found id: ""
	I0115 10:43:18.630606   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:18.630666   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.636059   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.640707   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:18.640728   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.681635   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:18.681667   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:18.803880   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:18.803913   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.864605   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:18.864642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.918210   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:18.918250   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.960702   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:18.960733   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:19.013206   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:19.013242   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:19.070193   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:19.070230   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:19.087983   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:19.088023   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:19.150096   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:19.150132   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:19.196977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:19.197006   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:19.244166   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:19.244202   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:19.290314   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:19.290349   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:22.182766   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:22.182794   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.182801   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.182808   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.182814   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.182820   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.182826   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.182836   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.182848   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.182858   47063 system_pods.go:74] duration metric: took 3.973880704s to wait for pod list to return data ...
	I0115 10:43:22.182869   47063 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:22.186304   47063 default_sa.go:45] found service account: "default"
	I0115 10:43:22.186344   47063 default_sa.go:55] duration metric: took 3.464907ms for default service account to be created ...
	I0115 10:43:22.186354   47063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:22.192564   47063 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:22.192595   47063 system_pods.go:89] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.192604   47063 system_pods.go:89] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.192611   47063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.192620   47063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.192627   47063 system_pods.go:89] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.192634   47063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.192644   47063 system_pods.go:89] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.192651   47063 system_pods.go:89] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.192661   47063 system_pods.go:126] duration metric: took 6.301001ms to wait for k8s-apps to be running ...
	I0115 10:43:22.192669   47063 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:22.192720   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:22.210150   47063 system_svc.go:56] duration metric: took 17.476738ms WaitForService to wait for kubelet.
	I0115 10:43:22.210169   47063 kubeadm.go:581] duration metric: took 4m23.02729406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:22.210190   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:22.214086   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:22.214111   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:22.214124   47063 node_conditions.go:105] duration metric: took 3.928309ms to run NodePressure ...
	I0115 10:43:22.214137   47063 start.go:228] waiting for startup goroutines ...
	I0115 10:43:22.214146   47063 start.go:233] waiting for cluster config update ...
	I0115 10:43:22.214158   47063 start.go:242] writing updated cluster config ...
	I0115 10:43:22.214394   47063 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:22.264250   47063 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:22.267546   47063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-709012" cluster and "default" namespace by default
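At this point the default-k8s-diff-port-709012 profile is up and kubectl defaults to it; the reported 1.29.0 client against the 1.28.4 server stays within the supported one-minor-version skew. A few illustrative follow-up checks (not part of the test run) would be:

kubectl config current-context      # default-k8s-diff-port-709012
kubectl get nodes -o wide
kubectl -n kube-system get pods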
	I0115 10:43:20.980266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:23.478672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.109313   46387 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0115 10:43:26.109392   46387 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:43:26.109501   46387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:43:26.109621   46387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:43:26.109750   46387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:43:26.109926   46387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:43:26.110051   46387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:43:26.110114   46387 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0115 10:43:26.110201   46387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:43:26.112841   46387 out.go:204]   - Generating certificates and keys ...
	I0115 10:43:26.112937   46387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:43:26.113031   46387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:43:26.113142   46387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:43:26.113237   46387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 10:43:26.113336   46387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:43:26.113414   46387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 10:43:26.113530   46387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 10:43:26.113617   46387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:43:26.113717   46387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:43:26.113814   46387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:43:26.113867   46387 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 10:43:26.113959   46387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:43:26.114029   46387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:43:26.114128   46387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:43:26.114214   46387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:43:26.114289   46387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:43:26.114400   46387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:43:26.115987   46387 out.go:204]   - Booting up control plane ...
	I0115 10:43:26.116100   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:43:26.116240   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:43:26.116349   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:43:26.116476   46387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:43:26.116677   46387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:43:26.116792   46387 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.004579 seconds
	I0115 10:43:26.116908   46387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:43:26.117097   46387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:43:26.117187   46387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:43:26.117349   46387 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-206509 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 10:43:26.117437   46387 kubeadm.go:322] [bootstrap-token] Using token: zc1jed.g57dxx99f2u8lwfg
	I0115 10:43:26.118960   46387 out.go:204]   - Configuring RBAC rules ...
	I0115 10:43:26.119074   46387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:43:26.119258   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:43:26.119401   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:43:26.119538   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:43:26.119657   46387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:43:26.119723   46387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:43:26.119796   46387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:43:26.119809   46387 kubeadm.go:322] 
	I0115 10:43:26.119857   46387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:43:26.119863   46387 kubeadm.go:322] 
	I0115 10:43:26.119923   46387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:43:26.119930   46387 kubeadm.go:322] 
	I0115 10:43:26.119950   46387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:43:26.120002   46387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:43:26.120059   46387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:43:26.120078   46387 kubeadm.go:322] 
	I0115 10:43:26.120120   46387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:43:26.120185   46387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:43:26.120249   46387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:43:26.120255   46387 kubeadm.go:322] 
	I0115 10:43:26.120359   46387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0115 10:43:26.120426   46387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:43:26.120433   46387 kubeadm.go:322] 
	I0115 10:43:26.120512   46387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120660   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 10:43:26.120687   46387 kubeadm.go:322]     --control-plane 	  
	I0115 10:43:26.120691   46387 kubeadm.go:322] 
	I0115 10:43:26.120757   46387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:43:26.120763   46387 kubeadm.go:322] 
	I0115 10:43:26.120831   46387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120969   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
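The join commands printed by kubeadm above embed a bootstrap token and the SHA-256 hash of the cluster CA public key. If that hash ever needs to be recomputed on the control-plane node, the standard kubeadm recipe is sketched below; /etc/kubernetes/pki/ca.crt is kubeadm's default CA location, assumed here rather than taken from this log.

# Recompute the discovery-token-ca-cert-hash (standard kubeadm recipe; CA path is the assumed default).
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'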
	I0115 10:43:26.120990   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:43:26.121000   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:43:26.122557   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:43:25.977703   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:27.979775   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.123754   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:43:26.133514   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
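The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A bridge-plus-host-local configuration of the general shape written for the bridge CNI option is sketched below; the field values (including the 10.244.0.0/16 subnet) are illustrative assumptions, not the file's actual contents.

# Illustrative bridge CNI config only; not the actual 1-k8s.conflist from this run.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF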
	I0115 10:43:26.152666   46387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:43:26.152776   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.152794   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=old-k8s-version-206509 minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.205859   46387 ops.go:34] apiserver oom_adj: -16
	I0115 10:43:26.398371   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.899064   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.398532   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.898380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.398986   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.899140   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.399224   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.898397   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.399321   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.899035   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.398549   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.898547   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.399096   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.898492   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.399077   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.899311   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:34.398839   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.980789   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:31.981727   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.479518   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.398611   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.898531   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.399422   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.898569   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.399432   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.399017   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.898561   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:39.398551   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.977916   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:38.978672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:39.899402   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.398556   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.898384   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:41.035213   46387 kubeadm.go:1088] duration metric: took 14.882479947s to wait for elevateKubeSystemPrivileges.
	I0115 10:43:41.035251   46387 kubeadm.go:406] StartCluster complete in 5m45.791159963s
	I0115 10:43:41.035271   46387 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.035357   46387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:43:41.037947   46387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.038220   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:43:41.038242   46387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:43:41.038314   46387 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038317   46387 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038333   46387 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-206509"
	I0115 10:43:41.038334   46387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-206509"
	W0115 10:43:41.038341   46387 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:43:41.038389   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038388   46387 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038405   46387 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-206509"
	W0115 10:43:41.038428   46387 addons.go:243] addon metrics-server should already be in state true
	I0115 10:43:41.038446   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:43:41.038467   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038724   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038738   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038783   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038787   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038815   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038909   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.054942   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0115 10:43:41.055314   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.055844   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.055868   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.056312   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.056464   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0115 10:43:41.056853   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.056878   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.056910   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.057198   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0115 10:43:41.057317   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057341   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.057532   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.057682   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.057844   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.057955   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057979   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.058300   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.058921   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.058952   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.061947   46387 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-206509"
	W0115 10:43:41.061973   46387 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:43:41.061999   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.062381   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.062405   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.075135   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0115 10:43:41.075593   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.075704   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0115 10:43:41.076514   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.076536   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.076723   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.077196   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.077219   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.077225   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077564   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077607   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.077723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.080161   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.080238   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.082210   46387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:43:41.083883   46387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:43:41.085452   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:43:41.085477   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:43:41.083855   46387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.085496   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.085496   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:43:41.085511   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.086304   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0115 10:43:41.086675   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.087100   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.087120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.087465   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.087970   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.088011   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.090492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.091743   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092335   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092355   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092675   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092695   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092833   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.092969   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.093129   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.093233   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.094042   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.094209   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.094296   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.094372   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.105226   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0115 10:43:41.105644   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.106092   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.106120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.106545   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.106759   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.108735   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.109022   46387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.109040   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:43:41.109057   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.112322   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112771   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.112797   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112914   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.113100   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.113279   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.113442   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.353016   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:43:41.353038   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:43:41.357846   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.365469   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.465358   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:43:41.465379   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:43:41.532584   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:41.532612   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:43:41.598528   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 10:43:41.605798   46387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-206509" context rescaled to 1 replicas
	I0115 10:43:41.605838   46387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:43:41.607901   46387 out.go:177] * Verifying Kubernetes components...
	I0115 10:43:41.609363   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:41.608778   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:42.634034   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268517129s)
	I0115 10:43:42.634071   46387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.024689682s)
	I0115 10:43:42.634090   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634095   46387 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.634103   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634046   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035489058s)
	I0115 10:43:42.634140   46387 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0115 10:43:42.634200   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.276326924s)
	I0115 10:43:42.634228   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634243   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634451   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634495   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634515   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634525   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634534   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634540   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634557   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634570   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634580   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634589   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634896   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634912   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634967   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634997   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.635008   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.656600   46387 node_ready.go:49] node "old-k8s-version-206509" has status "Ready":"True"
	I0115 10:43:42.656629   46387 node_ready.go:38] duration metric: took 22.522223ms waiting for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.656640   46387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:42.714802   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.714834   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.715273   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.715277   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.715303   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.722261   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:42.792908   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183451396s)
	I0115 10:43:42.792964   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.792982   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793316   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793339   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793352   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.793361   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793580   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793625   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793638   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793649   46387 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-206509"
	I0115 10:43:42.796113   46387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:43:42.798128   46387 addons.go:505] enable addons completed in 1.759885904s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:43:40.979360   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477862   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477895   46388 pod_ready.go:81] duration metric: took 4m0.006840717s waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:43.477906   46388 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:43.477915   46388 pod_ready.go:38] duration metric: took 4m3.414382685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:43.477933   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:43.477963   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:43.478033   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:43.533796   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:43.533825   46388 cri.go:89] found id: ""
	I0115 10:43:43.533836   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:43.533893   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.540165   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:43.540224   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:43.576831   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:43.576853   46388 cri.go:89] found id: ""
	I0115 10:43:43.576861   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:43.576922   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.581556   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:43.581616   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:43.625292   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.625315   46388 cri.go:89] found id: ""
	I0115 10:43:43.625323   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:43.625371   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.630741   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:43.630803   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:43.682511   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:43.682553   46388 cri.go:89] found id: ""
	I0115 10:43:43.682563   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:43.682621   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.688126   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:43.688194   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:43.739847   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.739866   46388 cri.go:89] found id: ""
	I0115 10:43:43.739873   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:43.739919   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.744569   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:43.744635   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:43.787603   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:43.787627   46388 cri.go:89] found id: ""
	I0115 10:43:43.787635   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:43.787676   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.792209   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:43.792271   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:43.838530   46388 cri.go:89] found id: ""
	I0115 10:43:43.838557   46388 logs.go:284] 0 containers: []
	W0115 10:43:43.838568   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:43.838576   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:43.838636   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:43.885727   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:43.885755   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:43.885761   46388 cri.go:89] found id: ""
	I0115 10:43:43.885769   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:43.885822   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.891036   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.895462   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:43.895493   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.939544   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:43.939568   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.985944   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:43.985973   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:44.052893   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:44.052923   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:44.116539   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:44.116569   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:44.173390   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:44.173432   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:44.194269   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:44.194295   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:44.239908   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:44.239935   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:44.729495   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:46.231080   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:46.231100   46387 pod_ready.go:81] duration metric: took 3.50881186s waiting for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:46.231109   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:48.239378   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:44.737413   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:44.737445   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:44.891846   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:44.891875   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:44.951418   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:44.951453   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:45.000171   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:45.000201   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:45.041629   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:45.041657   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.586439   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:47.602078   46388 api_server.go:72] duration metric: took 4m14.792413378s to wait for apiserver process to appear ...
	I0115 10:43:47.602102   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:47.602138   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:47.602193   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:47.646259   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:47.646283   46388 cri.go:89] found id: ""
	I0115 10:43:47.646291   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:47.646346   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.650757   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:47.650830   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:47.691688   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.691715   46388 cri.go:89] found id: ""
	I0115 10:43:47.691724   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:47.691777   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.696380   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:47.696467   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:47.738315   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:47.738340   46388 cri.go:89] found id: ""
	I0115 10:43:47.738349   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:47.738402   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.742810   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:47.742870   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:47.783082   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:47.783114   46388 cri.go:89] found id: ""
	I0115 10:43:47.783124   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:47.783178   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.787381   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:47.787432   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:47.832325   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:47.832353   46388 cri.go:89] found id: ""
	I0115 10:43:47.832363   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:47.832420   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.836957   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:47.837014   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:47.877146   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:47.877169   46388 cri.go:89] found id: ""
	I0115 10:43:47.877178   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:47.877231   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.881734   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:47.881782   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:47.921139   46388 cri.go:89] found id: ""
	I0115 10:43:47.921169   46388 logs.go:284] 0 containers: []
	W0115 10:43:47.921180   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:47.921188   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:47.921236   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:47.959829   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:47.959857   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:47.959864   46388 cri.go:89] found id: ""
	I0115 10:43:47.959872   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:47.959924   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.964105   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.968040   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:47.968059   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:48.017234   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:48.017266   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:48.073552   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:48.073583   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:48.512500   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:48.512539   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:48.564545   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:48.564578   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:48.609739   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:48.609768   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:48.654076   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:48.654106   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:48.691287   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:48.691314   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:48.739023   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:48.739063   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:48.791976   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:48.792018   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:48.808633   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:48.808659   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:48.933063   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:48.933099   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:48.974794   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:48.974825   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:49.735197   46387 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735227   46387 pod_ready.go:81] duration metric: took 3.504112323s waiting for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:49.735237   46387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735243   46387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740497   46387 pod_ready.go:92] pod "kube-proxy-lh96p" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:49.740515   46387 pod_ready.go:81] duration metric: took 5.267229ms waiting for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740525   46387 pod_ready.go:38] duration metric: took 7.083874855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:49.740537   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:49.740580   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:49.755697   46387 api_server.go:72] duration metric: took 8.149828702s to wait for apiserver process to appear ...
	I0115 10:43:49.755718   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:49.755731   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:43:49.762148   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:43:49.762995   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:43:49.763013   46387 api_server.go:131] duration metric: took 7.290279ms to wait for apiserver health ...
	I0115 10:43:49.763019   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:49.766597   46387 system_pods.go:59] 4 kube-system pods found
	I0115 10:43:49.766615   46387 system_pods.go:61] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.766620   46387 system_pods.go:61] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.766626   46387 system_pods.go:61] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.766631   46387 system_pods.go:61] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.766637   46387 system_pods.go:74] duration metric: took 3.613036ms to wait for pod list to return data ...
	I0115 10:43:49.766642   46387 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:49.768826   46387 default_sa.go:45] found service account: "default"
	I0115 10:43:49.768844   46387 default_sa.go:55] duration metric: took 2.197235ms for default service account to be created ...
	I0115 10:43:49.768850   46387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:49.772271   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:49.772296   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.772304   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.772314   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.772321   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.772339   46387 retry.go:31] will retry after 223.439669ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.001140   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.001165   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.001170   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.001176   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.001181   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.001198   46387 retry.go:31] will retry after 329.400473ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.335362   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.335386   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.335391   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.335398   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.335403   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.335420   46387 retry.go:31] will retry after 466.919302ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.806617   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.806643   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.806649   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.806655   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.806660   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.806678   46387 retry.go:31] will retry after 596.303035ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.407231   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:51.407257   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:51.407264   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:51.407271   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:51.407275   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:51.407292   46387 retry.go:31] will retry after 688.903723ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.102330   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.102357   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.102364   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.102374   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.102382   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.102399   46387 retry.go:31] will retry after 817.783297ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.925586   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.925612   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.925620   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.925629   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.925636   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.925658   46387 retry.go:31] will retry after 797.004884ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:53.728788   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:53.728812   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:53.728817   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:53.728823   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:53.728827   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:53.728843   46387 retry.go:31] will retry after 1.021568746s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.528236   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:43:51.533236   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:43:51.534697   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:43:51.534714   46388 api_server.go:131] duration metric: took 3.932606059s to wait for apiserver health ...
	I0115 10:43:51.534721   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:51.534744   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:51.534796   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:51.571704   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.571730   46388 cri.go:89] found id: ""
	I0115 10:43:51.571740   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:51.571793   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.576140   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:51.576201   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:51.614720   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:51.614803   46388 cri.go:89] found id: ""
	I0115 10:43:51.614823   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:51.614909   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.620904   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:51.620966   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:51.659679   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.659711   46388 cri.go:89] found id: ""
	I0115 10:43:51.659721   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:51.659779   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.664223   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:51.664275   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:51.701827   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:51.701850   46388 cri.go:89] found id: ""
	I0115 10:43:51.701858   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:51.701915   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.707296   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:51.707354   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:51.745962   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:51.745989   46388 cri.go:89] found id: ""
	I0115 10:43:51.746006   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:51.746061   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.750872   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:51.750942   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:51.796600   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:51.796637   46388 cri.go:89] found id: ""
	I0115 10:43:51.796647   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:51.796697   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.801250   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:51.801321   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:51.845050   46388 cri.go:89] found id: ""
	I0115 10:43:51.845072   46388 logs.go:284] 0 containers: []
	W0115 10:43:51.845081   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:51.845087   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:51.845144   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:51.880907   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:51.880935   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:51.880942   46388 cri.go:89] found id: ""
	I0115 10:43:51.880951   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:51.880997   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.885202   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.889086   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:51.889108   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.939740   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:51.939770   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.977039   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:51.977068   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:52.024927   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:52.024960   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:52.071850   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:52.071882   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:52.123313   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:52.123343   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:52.137274   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:52.137297   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:52.260488   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:52.260525   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:52.301121   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:52.301156   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:52.346323   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:52.346349   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:52.402759   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:52.402788   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:52.457075   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:52.457103   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:52.811321   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:52.811359   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:55.374293   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:55.374327   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.374335   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.374342   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.374348   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.374354   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.374361   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.374371   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.374382   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.374394   46388 system_pods.go:74] duration metric: took 3.83966542s to wait for pod list to return data ...
	I0115 10:43:55.374407   46388 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:55.376812   46388 default_sa.go:45] found service account: "default"
	I0115 10:43:55.376833   46388 default_sa.go:55] duration metric: took 2.418755ms for default service account to be created ...
	I0115 10:43:55.376843   46388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:55.383202   46388 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:55.383227   46388 system_pods.go:89] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.383236   46388 system_pods.go:89] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.383244   46388 system_pods.go:89] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.383285   46388 system_pods.go:89] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.383297   46388 system_pods.go:89] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.383303   46388 system_pods.go:89] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.383314   46388 system_pods.go:89] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.383325   46388 system_pods.go:89] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.383338   46388 system_pods.go:126] duration metric: took 6.489813ms to wait for k8s-apps to be running ...
	I0115 10:43:55.383349   46388 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:55.383401   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:55.399074   46388 system_svc.go:56] duration metric: took 15.719638ms WaitForService to wait for kubelet.
	I0115 10:43:55.399096   46388 kubeadm.go:581] duration metric: took 4m22.589439448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:55.399118   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:55.403855   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:55.403883   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:55.403896   46388 node_conditions.go:105] duration metric: took 4.771651ms to run NodePressure ...
	I0115 10:43:55.403908   46388 start.go:228] waiting for startup goroutines ...
	I0115 10:43:55.403917   46388 start.go:233] waiting for cluster config update ...
	I0115 10:43:55.403930   46388 start.go:242] writing updated cluster config ...
	I0115 10:43:55.404244   46388 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:55.453146   46388 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0115 10:43:55.455321   46388 out.go:177] * Done! kubectl is now configured to use "no-preload-824502" cluster and "default" namespace by default
	I0115 10:43:54.756077   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:54.756099   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:54.756104   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:54.756111   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:54.756116   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:54.756131   46387 retry.go:31] will retry after 1.152306172s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:55.913769   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:55.913792   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:55.913798   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:55.913804   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.913810   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:55.913826   46387 retry.go:31] will retry after 2.261296506s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:58.179679   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:58.179704   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:58.179710   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:58.179718   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:58.179722   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:58.179739   46387 retry.go:31] will retry after 2.012023518s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:00.197441   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:00.197471   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:00.197476   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:00.197483   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:00.197487   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:00.197505   46387 retry.go:31] will retry after 3.341619522s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:03.543730   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:03.543752   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:03.543757   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:03.543766   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:03.543771   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:03.543788   46387 retry.go:31] will retry after 2.782711895s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:06.332250   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:06.332276   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:06.332281   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:06.332288   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:06.332294   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:06.332310   46387 retry.go:31] will retry after 5.379935092s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:11.718269   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:11.718315   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:11.718324   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:11.718334   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:11.718343   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:11.718364   46387 retry.go:31] will retry after 6.238812519s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:17.963126   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:17.963150   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:17.963155   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:17.963162   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:17.963167   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:17.963183   46387 retry.go:31] will retry after 7.774120416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:25.743164   46387 system_pods.go:86] 6 kube-system pods found
	I0115 10:44:25.743190   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:25.743196   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:25.743200   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:25.743204   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:25.743210   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:25.743214   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:25.743231   46387 retry.go:31] will retry after 8.584433466s: missing components: kube-apiserver, kube-scheduler
	I0115 10:44:34.335720   46387 system_pods.go:86] 7 kube-system pods found
	I0115 10:44:34.335751   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:34.335759   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:34.335777   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:34.335785   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:34.335793   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:34.335801   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:34.335815   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:34.335834   46387 retry.go:31] will retry after 13.073630932s: missing components: kube-apiserver
	I0115 10:44:47.415277   46387 system_pods.go:86] 8 kube-system pods found
	I0115 10:44:47.415304   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:47.415311   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:47.415318   46387 system_pods.go:89] "kube-apiserver-old-k8s-version-206509" [e708ba3e-5deb-4b60-ab5b-52c4d671fa46] Running
	I0115 10:44:47.415326   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:47.415332   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:47.415339   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:47.415349   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:47.415355   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:47.415371   46387 system_pods.go:126] duration metric: took 57.64651504s to wait for k8s-apps to be running ...
	I0115 10:44:47.415382   46387 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:44:47.415444   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:44:47.433128   46387 system_svc.go:56] duration metric: took 17.740925ms WaitForService to wait for kubelet.
	I0115 10:44:47.433150   46387 kubeadm.go:581] duration metric: took 1m5.827285253s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:44:47.433174   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:44:47.435664   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:44:47.435685   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:44:47.435695   46387 node_conditions.go:105] duration metric: took 2.516113ms to run NodePressure ...
	I0115 10:44:47.435708   46387 start.go:228] waiting for startup goroutines ...
	I0115 10:44:47.435716   46387 start.go:233] waiting for cluster config update ...
	I0115 10:44:47.435728   46387 start.go:242] writing updated cluster config ...
	I0115 10:44:47.436091   46387 ssh_runner.go:195] Run: rm -f paused
	I0115 10:44:47.492053   46387 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0115 10:44:47.494269   46387 out.go:177] 
	W0115 10:44:47.495828   46387 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0115 10:44:47.497453   46387 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0115 10:44:47.498880   46387 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-206509" cluster and "default" namespace by default
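The version-skew warning above comes with minikube's own remedy: run the bundled kubectl, which is matched to the cluster's Kubernetes version rather than the host's. A minimal sketch, using the binary path and profile name that appear in this log (the chosen kubectl arguments are just illustrative):

  # uses a kubectl matching the cluster version (v1.16.0 here) instead of the host's v1.29.0
  out/minikube-linux-amd64 -p old-k8s-version-206509 kubectl -- get pods -A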
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:38:43 UTC, ends at Mon 2024-01-15 10:52:57 UTC. --
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.216457564Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=a4eab27d-9cf5-475c-81ad-d0491c2a397e name=/runtime.v1.ImageService/ImageStatus
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.217487026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d017b6d9-7890-4814-9c15-726ab767ac88 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.217533118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d017b6d9-7890-4814-9c15-726ab767ac88 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.217888275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]
string{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d017b6d9-7890-4814-9c15-726ab767ac88 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.231213297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0dc621c5-1bac-466a-bf83-da30654214f6 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.231275111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0dc621c5-1bac-466a-bf83-da30654214f6 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.232593334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=793f0975-4633-4926-928e-399b1a806e7f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.233022217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315977233008749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=793f0975-4633-4926-928e-399b1a806e7f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.233563997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c7a9c5b-4fe8-4192-8016-b2597bb47a24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.233653052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c7a9c5b-4fe8-4192-8016-b2597bb47a24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.234023050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c7a9c5b-4fe8-4192-8016-b2597bb47a24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.275186742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fe79e7de-2a50-4dab-9f12-8a7b70e76c05 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.275241535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fe79e7de-2a50-4dab-9f12-8a7b70e76c05 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.276656523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f8e81a64-c6c7-46a8-8af0-80994ec8830b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.277280184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315977277260742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f8e81a64-c6c7-46a8-8af0-80994ec8830b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.278230737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5c62e8b-d45b-4296-b022-025b518ceb4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.278284051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5c62e8b-d45b-4296-b022-025b518ceb4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.278494610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5c62e8b-d45b-4296-b022-025b518ceb4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.319089135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d6d6fa5d-0ff0-4456-8704-d8febdeedd4a name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.319146066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d6d6fa5d-0ff0-4456-8704-d8febdeedd4a name=/runtime.v1.RuntimeService/Version
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.320392394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e129b48d-32c2-4957-8dbd-232cb480acfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.320869829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705315977320765855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e129b48d-32c2-4957-8dbd-232cb480acfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.321610637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9971fd41-bc1f-4144-abcb-d61ebb81c72a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.321684192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9971fd41-bc1f-4144-abcb-d61ebb81c72a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:52:57 no-preload-824502 crio[729]: time="2024-01-15 10:52:57.321939076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9971fd41-bc1f-4144-abcb-d61ebb81c72a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	559a40ec4f19b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   8570c1add8152       storage-provisioner
	1c658cb24796b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d4e6452631333       busybox
	014ec3fd018c5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   33e649279b1e7       coredns-76f75df574-ft2wt
	d1d6c3b6e1b4e       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   568ce00da390a       kube-proxy-nlk2h
	9d1cf90048e83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8570c1add8152       storage-provisioner
	c382ae3f75656       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   858c56ed12a28       kube-scheduler-no-preload-824502
	0a1fe00474627       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   4b798a6d56bf7       etcd-no-preload-824502
	04397ad49a123       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   519d9ce32cc3c       kube-apiserver-no-preload-824502
	aea55e3208ce8       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   baab48d4ddcef       kube-controller-manager-no-preload-824502
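The container-status table above summarizes the same data returned by the ListContainers responses in the CRI-O journal. A rough sketch for reproducing it on the node, assuming the default crictl configuration shipped in the minikube VM for the cri-o runtime:

  # list running and exited containers known to cri-o over its CRI socket
  out/minikube-linux-amd64 -p no-preload-824502 ssh "sudo crictl ps -a"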
	
	
	==> coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57947 - 27121 "HINFO IN 3732147130076988560.2592678263682650894. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029725305s
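The random HINFO query above is typically CoreDNS's loop-detection probe. In-cluster name resolution can be spot-checked from the busybox pod listed in the container status; a sketch only (the context name follows minikube's profile-named convention, and the queried name is illustrative):

  # resolve the apiserver service through CoreDNS from inside the cluster
  kubectl --context no-preload-824502 exec busybox -- nslookup kubernetes.default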
	
	
	==> describe nodes <==
	Name:               no-preload-824502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-824502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=no-preload-824502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_29_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:29:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-824502
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:52:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:50:12 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:50:12 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:50:12 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:50:12 +0000   Mon, 15 Jan 2024 10:39:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.136
	  Hostname:    no-preload-824502
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed3417f43fb042f283634814d5ef2c19
	  System UUID:                ed3417f4-3fb0-42f2-8363-4814d5ef2c19
	  Boot ID:                    af76b30a-85fa-4e0a-abf3-71edc5159ff3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-ft2wt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-824502                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-824502             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-824502    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-nlk2h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-824502             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-6tcwm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-824502 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-824502 event: Registered Node no-preload-824502 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-824502 event: Registered Node no-preload-824502 in Controller
	
	
	==> dmesg <==
	[Jan15 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070320] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.814482] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.583066] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143889] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.479804] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.085547] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.138900] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.155329] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.107687] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[  +0.248642] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[Jan15 10:39] systemd-fstab-generator[1348]: Ignoring "noauto" for root device
	[ +15.081212] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] <==
	{"level":"info","ts":"2024-01-15T10:39:26.502489Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T10:39:26.503157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"736953c025287a25","local-member-id":"247e73b5d65300e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:39:26.50341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:39:26.524852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-01-15T10:39:26.524928Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-01-15T10:39:26.520538Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-15T10:39:26.525538Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"247e73b5d65300e1","initial-advertise-peer-urls":["https://192.168.50.136:2380"],"listen-peer-urls":["https://192.168.50.136:2380"],"advertise-client-urls":["https://192.168.50.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T10:39:26.529529Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T10:39:28.07387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-15T10:39:28.073966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-15T10:39:28.074024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 received MsgPreVoteResp from 247e73b5d65300e1 at term 2"}
	{"level":"info","ts":"2024-01-15T10:39:28.074043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became candidate at term 3"}
	{"level":"info","ts":"2024-01-15T10:39:28.074049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 received MsgVoteResp from 247e73b5d65300e1 at term 3"}
	{"level":"info","ts":"2024-01-15T10:39:28.074058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became leader at term 3"}
	{"level":"info","ts":"2024-01-15T10:39:28.074065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 247e73b5d65300e1 elected leader 247e73b5d65300e1 at term 3"}
	{"level":"info","ts":"2024-01-15T10:39:28.075932Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"247e73b5d65300e1","local-member-attributes":"{Name:no-preload-824502 ClientURLs:[https://192.168.50.136:2379]}","request-path":"/0/members/247e73b5d65300e1/attributes","cluster-id":"736953c025287a25","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T10:39:28.075983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:39:28.075954Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:39:28.078135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.136:2379"}
	{"level":"info","ts":"2024-01-15T10:39:28.078365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T10:39:28.078683Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T10:39:28.078721Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T10:49:28.106571Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":854}
	{"level":"info","ts":"2024-01-15T10:49:28.109774Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":854,"took":"2.338955ms","hash":1953090962}
	{"level":"info","ts":"2024-01-15T10:49:28.109998Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1953090962,"revision":854,"compact-revision":-1}
	
	
	==> kernel <==
	 10:52:57 up 14 min,  0 users,  load average: 0.17, 0.17, 0.16
	Linux no-preload-824502 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] <==
	I0115 10:47:30.611916       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:49:29.613701       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:29.614148       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0115 10:49:30.615221       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:30.615434       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:49:30.615499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:49:30.615227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:49:30.615646       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:49:30.617003       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:50:30.616749       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:50:30.616973       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:50:30.616984       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:50:30.618214       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:50:30.618267       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:50:30.618276       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:52:30.617094       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:52:30.617443       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:52:30.617482       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:52:30.618726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:52:30.618889       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:52:30.618950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] <==
	I0115 10:47:13.423300       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:47:42.887706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:47:43.435091       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:12.893186       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:13.448863       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:48:42.899157       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:48:43.458931       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:12.904673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:13.469382       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:49:42.909837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:49:43.478126       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:50:12.916444       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:13.487432       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:50:41.234994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="466.877µs"
	E0115 10:50:42.922470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:50:43.495893       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:50:55.242340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="367.153µs"
	E0115 10:51:12.928400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:13.505311       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:51:42.934345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:51:43.517088       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:52:12.939903       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:52:13.525118       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:52:42.946104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:52:43.533704       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] <==
	I0115 10:39:31.713743       1 server_others.go:72] "Using iptables proxy"
	I0115 10:39:31.739400       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.136"]
	I0115 10:39:31.801266       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0115 10:39:31.801354       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:39:31.801392       1 server_others.go:168] "Using iptables Proxier"
	I0115 10:39:31.804862       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:39:31.805071       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0115 10:39:31.805122       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:39:31.808052       1 config.go:188] "Starting service config controller"
	I0115 10:39:31.808198       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:39:31.808300       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:39:31.808327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:39:31.811355       1 config.go:315] "Starting node config controller"
	I0115 10:39:31.811475       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:39:31.908532       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:39:31.908949       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:39:31.912488       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] <==
	I0115 10:39:26.806562       1 serving.go:380] Generated self-signed cert in-memory
	W0115 10:39:29.562316       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:39:29.562371       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:39:29.562381       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:39:29.562387       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:39:29.620685       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0115 10:39:29.621071       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:39:29.623162       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:39:29.623245       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:39:29.624399       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:39:29.624568       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:39:29.723865       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:38:43 UTC, ends at Mon 2024-01-15 10:52:57 UTC. --
	Jan 15 10:50:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:50:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:50:27 no-preload-824502 kubelet[1354]: E0115 10:50:27.256408    1354 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:50:27 no-preload-824502 kubelet[1354]: E0115 10:50:27.256464    1354 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:50:27 no-preload-824502 kubelet[1354]: E0115 10:50:27.256662    1354 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bn4mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6tcwm_kube-system(1815c2ae-e5ce-4c79-9fd9-79b28c2c6780): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:50:27 no-preload-824502 kubelet[1354]: E0115 10:50:27.256707    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:50:41 no-preload-824502 kubelet[1354]: E0115 10:50:41.217245    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:50:55 no-preload-824502 kubelet[1354]: E0115 10:50:55.218279    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:51:09 no-preload-824502 kubelet[1354]: E0115 10:51:09.216453    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:51:23 no-preload-824502 kubelet[1354]: E0115 10:51:23.216326    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:51:23 no-preload-824502 kubelet[1354]: E0115 10:51:23.234146    1354 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:51:23 no-preload-824502 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:51:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:51:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:51:37 no-preload-824502 kubelet[1354]: E0115 10:51:37.217129    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:51:50 no-preload-824502 kubelet[1354]: E0115 10:51:50.216161    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:52:04 no-preload-824502 kubelet[1354]: E0115 10:52:04.216134    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:52:16 no-preload-824502 kubelet[1354]: E0115 10:52:16.216340    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:52:23 no-preload-824502 kubelet[1354]: E0115 10:52:23.235603    1354 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:52:23 no-preload-824502 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:52:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:52:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:52:27 no-preload-824502 kubelet[1354]: E0115 10:52:27.220013    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:52:42 no-preload-824502 kubelet[1354]: E0115 10:52:42.216284    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:52:57 no-preload-824502 kubelet[1354]: E0115 10:52:57.217120    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	
	
	==> storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] <==
	I0115 10:40:02.552269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:40:02.571460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:40:02.572271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:40:19.981115       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:40:19.981710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea!
	I0115 10:40:19.983394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9e99c98-1144-4bc5-bfe0-057dc2bb715e", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea became leader
	I0115 10:40:20.084175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea!
	
	
	==> storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] <==
	I0115 10:39:31.554605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:40:01.567540       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-824502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6tcwm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm: exit status 1 (63.568612ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6tcwm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0115 10:45:44.503613   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:46:39.519034   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:49:12.884164   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:49:21.452350   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:50:35.935735   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:51:39.519684   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206509 -n old-k8s-version-206509
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:53:48.076707916 +0000 UTC m=+5240.047875327
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-206509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-206509 logs -n 25: (1.668311198s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-967423 -- sudo                         | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-967423                                 | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-317803                           | kubernetes-upgrade-317803    | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-824502             | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:34:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:34:59.863813   47063 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:34:59.864093   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864103   47063 out.go:309] Setting ErrFile to fd 2...
	I0115 10:34:59.864108   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864345   47063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:34:59.864916   47063 out.go:303] Setting JSON to false
	I0115 10:34:59.865821   47063 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4600,"bootTime":1705310300,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:34:59.865878   47063 start.go:138] virtualization: kvm guest
	I0115 10:34:59.868392   47063 out.go:177] * [default-k8s-diff-port-709012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:34:59.869886   47063 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:34:59.869920   47063 notify.go:220] Checking for updates...
	I0115 10:34:59.871289   47063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:34:59.872699   47063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:34:59.874242   47063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:34:59.875739   47063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:34:59.877248   47063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:34:59.879143   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:34:59.879618   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.879682   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.893745   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0115 10:34:59.894091   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.894610   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.894633   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.894933   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.895112   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.895305   47063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:34:59.895579   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.895611   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.909045   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0115 10:34:59.909415   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.909868   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.909886   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.910173   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.910346   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.943453   47063 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:34:59.945154   47063 start.go:298] selected driver: kvm2
	I0115 10:34:59.945164   47063 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.945252   47063 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:34:59.945926   47063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.945991   47063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:34:59.959656   47063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:34:59.960028   47063 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:34:59.960078   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:34:59.960091   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:34:59.960106   47063 start_flags.go:321] config:
	{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.960261   47063 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.962534   47063 out.go:177] * Starting control plane node default-k8s-diff-port-709012 in cluster default-k8s-diff-port-709012
	I0115 10:35:00.734685   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:34:59.963970   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:34:59.964003   47063 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:34:59.964012   47063 cache.go:56] Caching tarball of preloaded images
	I0115 10:34:59.964081   47063 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:34:59.964090   47063 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:34:59.964172   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:34:59.964356   47063 start.go:365] acquiring machines lock for default-k8s-diff-port-709012: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:35:06.814638   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:09.886665   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:15.966704   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:19.038663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:25.118649   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:28.190674   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:34.270660   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:37.342618   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:43.422663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:46.494729   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:52.574698   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:55.646737   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:01.726677   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:04.798681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:10.878645   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:13.950716   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:20.030691   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:23.102681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:29.182668   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:32.254641   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:38.334686   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:41.406690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:47.486639   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:50.558690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:56.638684   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:59.710581   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:05.790664   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:08.862738   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:14.942615   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:18.014720   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:24.094644   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:27.098209   46387 start.go:369] acquired machines lock for "old-k8s-version-206509" in 4m37.373222591s
	I0115 10:37:27.098259   46387 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:27.098264   46387 fix.go:54] fixHost starting: 
	I0115 10:37:27.098603   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:27.098633   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:27.112818   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0115 10:37:27.113206   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:27.113638   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:37:27.113660   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:27.113943   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:27.114126   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:27.114270   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:37:27.115824   46387 fix.go:102] recreateIfNeeded on old-k8s-version-206509: state=Stopped err=<nil>
	I0115 10:37:27.115846   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	W0115 10:37:27.116007   46387 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:27.118584   46387 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-206509" ...
	I0115 10:37:27.119985   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Start
	I0115 10:37:27.120145   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring networks are active...
	I0115 10:37:27.120788   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network default is active
	I0115 10:37:27.121077   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network mk-old-k8s-version-206509 is active
	I0115 10:37:27.121463   46387 main.go:141] libmachine: (old-k8s-version-206509) Getting domain xml...
	I0115 10:37:27.122185   46387 main.go:141] libmachine: (old-k8s-version-206509) Creating domain...
	I0115 10:37:28.295990   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting to get IP...
	I0115 10:37:28.297038   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.297393   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.297470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.297380   47440 retry.go:31] will retry after 254.616903ms: waiting for machine to come up
	I0115 10:37:28.553730   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.554213   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.554238   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.554159   47440 retry.go:31] will retry after 350.995955ms: waiting for machine to come up
	I0115 10:37:28.906750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.907189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.907222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.907146   47440 retry.go:31] will retry after 441.292217ms: waiting for machine to come up
	I0115 10:37:29.349643   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.350011   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.350042   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.349959   47440 retry.go:31] will retry after 544.431106ms: waiting for machine to come up
	I0115 10:37:27.096269   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:27.096303   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:37:27.098084   46388 machine.go:91] provisioned docker machine in 4m37.366643974s
	I0115 10:37:27.098120   46388 fix.go:56] fixHost completed within 4m37.388460167s
	I0115 10:37:27.098126   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 4m37.388479036s
	W0115 10:37:27.098153   46388 start.go:694] error starting host: provision: host is not running
	W0115 10:37:27.098242   46388 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 10:37:27.098252   46388 start.go:709] Will try again in 5 seconds ...
	I0115 10:37:29.895609   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.896157   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.896189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.896032   47440 retry.go:31] will retry after 489.420436ms: waiting for machine to come up
	I0115 10:37:30.386614   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:30.387037   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:30.387071   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:30.387005   47440 retry.go:31] will retry after 779.227065ms: waiting for machine to come up
	I0115 10:37:31.167934   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:31.168316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:31.168343   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:31.168273   47440 retry.go:31] will retry after 878.328646ms: waiting for machine to come up
	I0115 10:37:32.048590   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:32.048976   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:32.049001   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:32.048920   47440 retry.go:31] will retry after 1.282650862s: waiting for machine to come up
	I0115 10:37:33.333699   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:33.334132   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:33.334161   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:33.334078   47440 retry.go:31] will retry after 1.548948038s: waiting for machine to come up
	I0115 10:37:32.100253   46388 start.go:365] acquiring machines lock for no-preload-824502: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:37:34.884455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:34.884845   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:34.884866   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:34.884800   47440 retry.go:31] will retry after 1.555315627s: waiting for machine to come up
	I0115 10:37:36.441833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:36.442329   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:36.442352   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:36.442281   47440 retry.go:31] will retry after 1.803564402s: waiting for machine to come up
	I0115 10:37:38.247833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:38.248241   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:38.248283   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:38.248213   47440 retry.go:31] will retry after 3.514521425s: waiting for machine to come up
	I0115 10:37:41.766883   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:41.767187   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:41.767222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:41.767154   47440 retry.go:31] will retry after 4.349871716s: waiting for machine to come up
	I0115 10:37:47.571869   46584 start.go:369] acquired machines lock for "embed-certs-781270" in 4m40.757219204s
	I0115 10:37:47.571928   46584 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:47.571936   46584 fix.go:54] fixHost starting: 
	I0115 10:37:47.572344   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:47.572382   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:47.591532   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0115 10:37:47.591905   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:47.592471   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:37:47.592513   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:47.592835   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:47.593060   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:37:47.593221   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:37:47.594825   46584 fix.go:102] recreateIfNeeded on embed-certs-781270: state=Stopped err=<nil>
	I0115 10:37:47.594856   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	W0115 10:37:47.595015   46584 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:47.597457   46584 out.go:177] * Restarting existing kvm2 VM for "embed-certs-781270" ...
	I0115 10:37:46.118479   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.118936   46387 main.go:141] libmachine: (old-k8s-version-206509) Found IP for machine: 192.168.61.70
	I0115 10:37:46.118960   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserving static IP address...
	I0115 10:37:46.118978   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has current primary IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.119402   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.119425   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserved static IP address: 192.168.61.70
	I0115 10:37:46.119441   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | skip adding static IP to network mk-old-k8s-version-206509 - found existing host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"}
	I0115 10:37:46.119455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Getting to WaitForSSH function...
	I0115 10:37:46.119467   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting for SSH to be available...
	I0115 10:37:46.121874   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122204   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.122236   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122340   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH client type: external
	I0115 10:37:46.122364   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa (-rw-------)
	I0115 10:37:46.122452   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:37:46.122476   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | About to run SSH command:
	I0115 10:37:46.122492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | exit 0
	I0115 10:37:46.214102   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | SSH cmd err, output: <nil>: 
	I0115 10:37:46.214482   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetConfigRaw
	I0115 10:37:46.215064   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.217294   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217579   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.217618   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217784   46387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:37:46.218001   46387 machine.go:88] provisioning docker machine ...
	I0115 10:37:46.218022   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:46.218242   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218440   46387 buildroot.go:166] provisioning hostname "old-k8s-version-206509"
	I0115 10:37:46.218462   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218593   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.220842   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221188   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.221226   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221374   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.221525   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221662   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221760   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.221905   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.222391   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.222411   46387 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-206509 && echo "old-k8s-version-206509" | sudo tee /etc/hostname
	I0115 10:37:46.354906   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-206509
	
	I0115 10:37:46.354939   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.357679   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358051   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.358089   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358245   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.358470   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358642   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358799   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.358957   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.359291   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.359318   46387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-206509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-206509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-206509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:37:46.491369   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:46.491397   46387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:37:46.491413   46387 buildroot.go:174] setting up certificates
	I0115 10:37:46.491422   46387 provision.go:83] configureAuth start
	I0115 10:37:46.491430   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.491687   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.494369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.494779   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494863   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.496985   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497338   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.497368   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497537   46387 provision.go:138] copyHostCerts
	I0115 10:37:46.497598   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:37:46.497613   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:37:46.497694   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:37:46.497806   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:37:46.497818   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:37:46.497848   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:37:46.497925   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:37:46.497945   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:37:46.497982   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:37:46.498043   46387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-206509 san=[192.168.61.70 192.168.61.70 localhost 127.0.0.1 minikube old-k8s-version-206509]
	I0115 10:37:46.824648   46387 provision.go:172] copyRemoteCerts
	I0115 10:37:46.824702   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:37:46.824723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.827470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827785   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.827818   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827972   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.828174   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.828336   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.828484   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:46.919822   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:37:46.941728   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:37:46.963042   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0115 10:37:46.983757   46387 provision.go:86] duration metric: configureAuth took 492.325875ms
	I0115 10:37:46.983777   46387 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:37:46.983966   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:37:46.984048   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.986525   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.986843   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.986869   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.987107   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.987323   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987503   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987651   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.987795   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.988198   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.988219   46387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:37:47.308225   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:37:47.308256   46387 machine.go:91] provisioned docker machine in 1.090242192s
	I0115 10:37:47.308269   46387 start.go:300] post-start starting for "old-k8s-version-206509" (driver="kvm2")
	I0115 10:37:47.308284   46387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:37:47.308310   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.308641   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:37:47.308674   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.311316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311665   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.311700   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311835   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.312024   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.312190   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.312315   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.407169   46387 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:37:47.411485   46387 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:37:47.411504   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:37:47.411566   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:37:47.411637   46387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:37:47.411715   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:37:47.419976   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:47.446992   46387 start.go:303] post-start completed in 138.700951ms
	I0115 10:37:47.447013   46387 fix.go:56] fixHost completed within 20.348748891s
	I0115 10:37:47.447031   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.449638   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.449996   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.450048   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.450136   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.450309   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450620   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.450749   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:47.451070   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:47.451085   46387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:37:47.571711   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315067.520557177
	
	I0115 10:37:47.571729   46387 fix.go:206] guest clock: 1705315067.520557177
	I0115 10:37:47.571748   46387 fix.go:219] Guest: 2024-01-15 10:37:47.520557177 +0000 UTC Remote: 2024-01-15 10:37:47.447016864 +0000 UTC m=+297.904172196 (delta=73.540313ms)
	I0115 10:37:47.571772   46387 fix.go:190] guest clock delta is within tolerance: 73.540313ms
	I0115 10:37:47.571782   46387 start.go:83] releasing machines lock for "old-k8s-version-206509", held for 20.473537585s
	I0115 10:37:47.571810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.572157   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:47.574952   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575328   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.575366   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.575957   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576146   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576232   46387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:37:47.576273   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.576381   46387 ssh_runner.go:195] Run: cat /version.json
	I0115 10:37:47.576406   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.578863   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579052   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579218   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579248   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579347   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579378   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579385   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579577   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579583   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579775   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.579810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579912   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.580094   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.580316   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.702555   46387 ssh_runner.go:195] Run: systemctl --version
	I0115 10:37:47.708309   46387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:37:47.862103   46387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:37:47.869243   46387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:37:47.869321   46387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:37:47.886013   46387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:37:47.886033   46387 start.go:475] detecting cgroup driver to use...
	I0115 10:37:47.886093   46387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:37:47.901265   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:37:47.913762   46387 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:37:47.913815   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:37:47.926880   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:37:47.942744   46387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:37:48.050667   46387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:37:48.168614   46387 docker.go:233] disabling docker service ...
	I0115 10:37:48.168679   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:37:48.181541   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:37:48.193155   46387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:37:48.312374   46387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:37:48.420624   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:37:48.432803   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:37:48.449232   46387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0115 10:37:48.449292   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.458042   46387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:37:48.458109   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.466909   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.475511   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.484081   46387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:37:48.493186   46387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:37:48.502460   46387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:37:48.502507   46387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:37:48.514913   46387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:37:48.522816   46387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:37:48.630774   46387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:37:48.807089   46387 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:37:48.807170   46387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:37:48.812950   46387 start.go:543] Will wait 60s for crictl version
	I0115 10:37:48.813005   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:48.816919   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:37:48.860058   46387 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:37:48.860143   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.916839   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.968312   46387 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0115 10:37:48.969913   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:48.972776   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973219   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:48.973249   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973519   46387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:37:48.977593   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:48.990551   46387 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:37:48.990613   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:49.030917   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:49.030973   46387 ssh_runner.go:195] Run: which lz4
	I0115 10:37:49.035059   46387 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:37:49.039231   46387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:37:49.039262   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0115 10:37:47.598904   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Start
	I0115 10:37:47.599102   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring networks are active...
	I0115 10:37:47.599886   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network default is active
	I0115 10:37:47.600258   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network mk-embed-certs-781270 is active
	I0115 10:37:47.600652   46584 main.go:141] libmachine: (embed-certs-781270) Getting domain xml...
	I0115 10:37:47.601365   46584 main.go:141] libmachine: (embed-certs-781270) Creating domain...
	I0115 10:37:48.842510   46584 main.go:141] libmachine: (embed-certs-781270) Waiting to get IP...
	I0115 10:37:48.843267   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:48.843637   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:48.843731   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:48.843603   47574 retry.go:31] will retry after 262.69562ms: waiting for machine to come up
	I0115 10:37:49.108361   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.108861   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.108901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.108796   47574 retry.go:31] will retry after 379.820541ms: waiting for machine to come up
	I0115 10:37:49.490343   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.490939   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.490979   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.490898   47574 retry.go:31] will retry after 463.282743ms: waiting for machine to come up
	I0115 10:37:49.956222   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.956694   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.956725   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.956646   47574 retry.go:31] will retry after 539.780461ms: waiting for machine to come up
	I0115 10:37:50.498391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:50.498901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:50.498935   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:50.498849   47574 retry.go:31] will retry after 611.580301ms: waiting for machine to come up
	I0115 10:37:51.111752   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.112228   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.112263   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.112194   47574 retry.go:31] will retry after 837.335782ms: waiting for machine to come up
	I0115 10:37:50.824399   46387 crio.go:444] Took 1.789376 seconds to copy over tarball
	I0115 10:37:50.824466   46387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:37:53.837707   46387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013210203s)
	I0115 10:37:53.837742   46387 crio.go:451] Took 3.013322 seconds to extract the tarball
	I0115 10:37:53.837753   46387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:37:53.876939   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:53.922125   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:53.922161   46387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:37:53.922213   46387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:53.922249   46387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.922267   46387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.922300   46387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.922520   46387 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.922527   46387 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.922544   46387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.922547   46387 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.923794   46387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.923809   46387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.923811   46387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.923807   46387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.923785   46387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.923843   46387 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.083650   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0115 10:37:54.090328   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.095213   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.123642   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.124012   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.139399   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.139406   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.207117   46387 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0115 10:37:54.207170   46387 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0115 10:37:54.207168   46387 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0115 10:37:54.207202   46387 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.207230   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.207248   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.248774   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.269586   46387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0115 10:37:54.269636   46387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.269661   46387 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0115 10:37:54.269693   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.269693   46387 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.269785   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404758   46387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0115 10:37:54.404862   46387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0115 10:37:54.404907   46387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.404969   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0115 10:37:54.404996   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404873   46387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.405034   46387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0115 10:37:54.405064   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404975   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.405082   46387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.405174   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.405202   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.405149   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.502357   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.502402   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0115 10:37:54.502507   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0115 10:37:54.502547   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.502504   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.502620   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0115 10:37:54.510689   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0115 10:37:54.577797   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0115 10:37:54.577854   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0115 10:37:54.577885   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0115 10:37:54.577945   46387 cache_images.go:92] LoadImages completed in 655.770059ms
	W0115 10:37:54.578019   46387 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0115 10:37:54.578091   46387 ssh_runner.go:195] Run: crio config
	I0115 10:37:51.950759   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.951289   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.951322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.951237   47574 retry.go:31] will retry after 817.063291ms: waiting for machine to come up
	I0115 10:37:52.770506   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:52.771015   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:52.771043   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:52.770977   47574 retry.go:31] will retry after 1.000852987s: waiting for machine to come up
	I0115 10:37:53.774011   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:53.774478   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:53.774518   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:53.774452   47574 retry.go:31] will retry after 1.171113667s: waiting for machine to come up
	I0115 10:37:54.947562   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:54.947925   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:54.947951   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:54.947887   47574 retry.go:31] will retry after 1.982035367s: waiting for machine to come up
	I0115 10:37:54.646104   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:37:54.750728   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:37:54.750754   46387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:37:54.750779   46387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-206509 NodeName:old-k8s-version-206509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 10:37:54.750935   46387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-206509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-206509
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:37:54.751014   46387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-206509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:37:54.751063   46387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0115 10:37:54.761568   46387 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:37:54.761645   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:37:54.771892   46387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0115 10:37:54.788678   46387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:37:54.804170   46387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0115 10:37:54.820285   46387 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I0115 10:37:54.823831   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
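(The bash one-liner above updates /etc/hosts atomically: it filters out any stale control-plane.minikube.internal entry, appends the new mapping, and copies the temp file back over /etc/hosts. Reconstructed from the command shown, the guest ends up with a line like the following, tab-separated to match the grep pattern:)

	192.168.61.70	control-plane.minikube.internal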
	I0115 10:37:54.834806   46387 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509 for IP: 192.168.61.70
	I0115 10:37:54.834838   46387 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:37:54.835023   46387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:37:54.835070   46387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:37:54.835136   46387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.key
	I0115 10:37:54.835190   46387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key.99472042
	I0115 10:37:54.835249   46387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key
	I0115 10:37:54.835356   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:37:54.835392   46387 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:37:54.835401   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:37:54.835439   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:37:54.835467   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:37:54.835491   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:37:54.835531   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:54.836204   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:37:54.859160   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:37:54.884674   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:37:54.907573   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:37:54.930846   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:37:54.953329   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:37:54.975335   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:37:54.997505   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:37:55.020494   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:37:55.042745   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:37:55.064085   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:37:55.085243   46387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:37:55.101189   46387 ssh_runner.go:195] Run: openssl version
	I0115 10:37:55.106849   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:37:55.118631   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123477   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123545   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.129290   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:37:55.141464   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:37:55.153514   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157901   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157967   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.163557   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:37:55.173419   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:37:55.184850   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189454   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189508   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.194731   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:37:55.205634   46387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:37:55.209881   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:37:55.215521   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:37:55.221031   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:37:55.226730   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:37:55.232566   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:37:55.238251   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
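(Each of the openssl runs above uses -checkend 86400, which exits non-zero if the certificate has expired or will expire within the next 86400 seconds, i.e. 24 hours. A minimal standalone version of the same check, using one of the certificate paths from this log, would be:)

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
	  && echo "certificate valid for at least 24h" \
	  || echo "certificate expires within 24h"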
	I0115 10:37:55.244098   46387 kubeadm.go:404] StartCluster: {Name:old-k8s-version-206509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:37:55.244188   46387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:37:55.244243   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:37:55.293223   46387 cri.go:89] found id: ""
	I0115 10:37:55.293296   46387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:37:55.305374   46387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:37:55.305403   46387 kubeadm.go:636] restartCluster start
	I0115 10:37:55.305477   46387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:37:55.314925   46387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.316564   46387 kubeconfig.go:92] found "old-k8s-version-206509" server: "https://192.168.61.70:8443"
	I0115 10:37:55.319961   46387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:37:55.329062   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.329148   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.340866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.829433   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.829549   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.843797   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.329336   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.329436   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.343947   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.829507   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.829623   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.843692   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.329438   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.329522   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.341416   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.830063   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.830153   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.844137   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.329648   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.329743   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.342211   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.829792   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.829891   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.842397   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:59.330122   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.330202   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.346667   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.931004   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:56.931428   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:56.931461   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:56.931364   47574 retry.go:31] will retry after 2.358737657s: waiting for machine to come up
	I0115 10:37:59.292322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:59.292784   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:59.292817   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:59.292726   47574 retry.go:31] will retry after 2.808616591s: waiting for machine to come up
	I0115 10:37:59.829162   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.829242   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.844148   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.329799   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.329901   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.345118   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.829706   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.829806   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.845105   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.329598   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.329678   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.341872   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.829350   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.829424   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.843987   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.329874   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.329944   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.342152   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.829617   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.829711   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.841636   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.329206   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.329306   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.341373   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.829987   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.830080   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.842151   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:04.329957   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.330047   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.342133   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.103667   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:02.104098   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:02.104127   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:02.104058   47574 retry.go:31] will retry after 2.823867183s: waiting for machine to come up
	I0115 10:38:04.931219   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:04.931550   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:04.931594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:04.931523   47574 retry.go:31] will retry after 4.042933854s: waiting for machine to come up
	I0115 10:38:04.829477   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.829599   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.841546   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.329351   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:05.329417   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:05.341866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.341892   46387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:05.341900   46387 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:05.341910   46387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:05.342037   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:05.376142   46387 cri.go:89] found id: ""
	I0115 10:38:05.376206   46387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:05.391778   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:05.402262   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:05.402331   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411457   46387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411489   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:05.526442   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.239898   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.449098   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.515862   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.598545   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:06.598653   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.099595   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.599677   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.099492   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.599629   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.627737   46387 api_server.go:72] duration metric: took 2.029196375s to wait for apiserver process to appear ...
	I0115 10:38:08.627766   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:08.627803   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
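(The healthz wait above is a plain HTTPS GET against the apiserver endpoint recorded in the log; an equivalent manual probe, skipping TLS verification since the minikube CA is not in the host trust store, would be:)

	curl -k https://192.168.61.70:8443/healthz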
	I0115 10:38:10.199201   47063 start.go:369] acquired machines lock for "default-k8s-diff-port-709012" in 3m10.23481312s
	I0115 10:38:10.199261   47063 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:10.199269   47063 fix.go:54] fixHost starting: 
	I0115 10:38:10.199630   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:10.199667   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:10.215225   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0115 10:38:10.215627   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:10.216040   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:10.216068   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:10.216372   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:10.216583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:10.216829   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:10.218454   47063 fix.go:102] recreateIfNeeded on default-k8s-diff-port-709012: state=Stopped err=<nil>
	I0115 10:38:10.218482   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	W0115 10:38:10.218676   47063 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:10.220860   47063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-709012" ...
	I0115 10:38:08.976035   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976545   46584 main.go:141] libmachine: (embed-certs-781270) Found IP for machine: 192.168.72.222
	I0115 10:38:08.976574   46584 main.go:141] libmachine: (embed-certs-781270) Reserving static IP address...
	I0115 10:38:08.976592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has current primary IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976946   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.976980   46584 main.go:141] libmachine: (embed-certs-781270) DBG | skip adding static IP to network mk-embed-certs-781270 - found existing host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"}
	I0115 10:38:08.976997   46584 main.go:141] libmachine: (embed-certs-781270) Reserved static IP address: 192.168.72.222
	I0115 10:38:08.977017   46584 main.go:141] libmachine: (embed-certs-781270) Waiting for SSH to be available...
	I0115 10:38:08.977033   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Getting to WaitForSSH function...
	I0115 10:38:08.979155   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979456   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.979483   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979609   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH client type: external
	I0115 10:38:08.979658   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa (-rw-------)
	I0115 10:38:08.979699   46584 main.go:141] libmachine: (embed-certs-781270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:08.979718   46584 main.go:141] libmachine: (embed-certs-781270) DBG | About to run SSH command:
	I0115 10:38:08.979734   46584 main.go:141] libmachine: (embed-certs-781270) DBG | exit 0
	I0115 10:38:09.082171   46584 main.go:141] libmachine: (embed-certs-781270) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:09.082546   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetConfigRaw
	I0115 10:38:09.083235   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.085481   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.085845   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.085873   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.086115   46584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:38:09.086309   46584 machine.go:88] provisioning docker machine ...
	I0115 10:38:09.086331   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.086549   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086714   46584 buildroot.go:166] provisioning hostname "embed-certs-781270"
	I0115 10:38:09.086736   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086884   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.089346   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089702   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.089727   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.090035   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090180   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090319   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.090464   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.090845   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.090862   46584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname
	I0115 10:38:09.240609   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781270
	
	I0115 10:38:09.240643   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.243233   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243586   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.243616   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243764   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.243976   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244292   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.244453   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.244774   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.244800   46584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781270/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:09.388902   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
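The provisioning steps above are just a sequence of SSH commands run against the guest as the docker user with the machine's id_rsa key (set the hostname, patch /etc/hosts, and so on). Below is a minimal, self-contained sketch of running one such command with golang.org/x/crypto/ssh; the host, user, and key path are taken from the log, while everything else is an illustrative assumption rather than minikube's actual provisioning code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address come from the log above; this is only a sketch.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.72.222:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// Same hostname command the provisioner runs above.
	out, err := session.CombinedOutput(`sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s, err: %v\n", out, err)
}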
	I0115 10:38:09.388932   46584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:09.388968   46584 buildroot.go:174] setting up certificates
	I0115 10:38:09.388981   46584 provision.go:83] configureAuth start
	I0115 10:38:09.388998   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.389254   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.392236   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392603   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.392643   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392750   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.395249   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395596   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.395629   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395797   46584 provision.go:138] copyHostCerts
	I0115 10:38:09.395858   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:09.395872   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:09.395939   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:09.396037   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:09.396045   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:09.396067   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:09.396134   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:09.396141   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:09.396159   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:09.396212   46584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781270 san=[192.168.72.222 192.168.72.222 localhost 127.0.0.1 minikube embed-certs-781270]
	I0115 10:38:09.457000   46584 provision.go:172] copyRemoteCerts
	I0115 10:38:09.457059   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:09.457081   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.459709   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460074   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.460102   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460356   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.460522   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.460681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.460798   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:09.556211   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:09.578947   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:09.601191   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:09.623814   46584 provision.go:86] duration metric: configureAuth took 234.815643ms
	I0115 10:38:09.623844   46584 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:09.624070   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:09.624157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.626592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.626930   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.626972   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.627141   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.627326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627492   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627607   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.627755   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.628058   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.628086   46584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:09.931727   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:09.931765   46584 machine.go:91] provisioned docker machine in 845.442044ms
	I0115 10:38:09.931777   46584 start.go:300] post-start starting for "embed-certs-781270" (driver="kvm2")
	I0115 10:38:09.931790   46584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:09.931810   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.932100   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:09.932130   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.934487   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934811   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.934836   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934999   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.935160   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.935313   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.935480   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.028971   46584 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:10.032848   46584 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:10.032871   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:10.032955   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:10.033045   46584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:10.033162   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:10.042133   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:10.064619   46584 start.go:303] post-start completed in 132.827155ms
	I0115 10:38:10.064658   46584 fix.go:56] fixHost completed within 22.492708172s
	I0115 10:38:10.064681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.067323   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067651   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.067675   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067812   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.068037   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068272   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068449   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.068587   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:10.068904   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:10.068919   46584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:10.199025   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315090.148648598
	
	I0115 10:38:10.199045   46584 fix.go:206] guest clock: 1705315090.148648598
	I0115 10:38:10.199053   46584 fix.go:219] Guest: 2024-01-15 10:38:10.148648598 +0000 UTC Remote: 2024-01-15 10:38:10.064662616 +0000 UTC m=+303.401739583 (delta=83.985982ms)
	I0115 10:38:10.199088   46584 fix.go:190] guest clock delta is within tolerance: 83.985982ms
	I0115 10:38:10.199096   46584 start.go:83] releasing machines lock for "embed-certs-781270", held for 22.627192785s
	I0115 10:38:10.199122   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.199368   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:10.201962   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202349   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.202389   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202603   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203417   46584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:10.203461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.203546   46584 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:10.203570   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.206022   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206257   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206371   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206400   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.206673   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206700   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206768   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.206910   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.206911   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.207087   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.207191   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.207335   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.207465   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.327677   46584 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:10.333127   46584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:10.473183   46584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:10.480054   46584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:10.480115   46584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:10.494367   46584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:10.494388   46584 start.go:475] detecting cgroup driver to use...
	I0115 10:38:10.494463   46584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:10.508327   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:10.519950   46584 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:10.520003   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:10.531743   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:10.544980   46584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:10.650002   46584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:10.767145   46584 docker.go:233] disabling docker service ...
	I0115 10:38:10.767214   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:10.782073   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:10.796419   46584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:10.913422   46584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:11.016113   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:11.032638   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:11.053360   46584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:11.053415   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.064008   46584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:11.064067   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.074353   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.084486   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.093962   46584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:11.105487   46584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:11.117411   46584 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:11.117469   46584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:11.133780   46584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:11.145607   46584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:11.257012   46584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:11.437979   46584 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:11.438050   46584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:11.445814   46584 start.go:543] Will wait 60s for crictl version
	I0115 10:38:11.445896   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:38:11.449770   46584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:11.491895   46584 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:11.491985   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.543656   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.609733   46584 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:11.611238   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:11.614594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.614947   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:11.614988   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.615225   46584 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:11.619516   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:11.635101   46584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:11.635170   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:11.675417   46584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:11.675504   46584 ssh_runner.go:195] Run: which lz4
	I0115 10:38:11.679733   46584 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:38:11.683858   46584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:11.683889   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:13.628977   46387 api_server.go:269] stopped: https://192.168.61.70:8443/healthz: Get "https://192.168.61.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0115 10:38:13.629022   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.222501   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Start
	I0115 10:38:10.222694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring networks are active...
	I0115 10:38:10.223335   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network default is active
	I0115 10:38:10.225164   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network mk-default-k8s-diff-port-709012 is active
	I0115 10:38:10.225189   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Getting domain xml...
	I0115 10:38:10.225201   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Creating domain...
	I0115 10:38:11.529205   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting to get IP...
	I0115 10:38:11.530265   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530808   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530886   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.530786   47689 retry.go:31] will retry after 220.836003ms: waiting for machine to come up
	I0115 10:38:11.753500   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754152   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.754119   47689 retry.go:31] will retry after 288.710195ms: waiting for machine to come up
	I0115 10:38:12.044613   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045149   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045179   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.045065   47689 retry.go:31] will retry after 321.962888ms: waiting for machine to come up
	I0115 10:38:12.368694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369119   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369171   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.369075   47689 retry.go:31] will retry after 457.128837ms: waiting for machine to come up
	I0115 10:38:12.827574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828079   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828108   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.828011   47689 retry.go:31] will retry after 524.042929ms: waiting for machine to come up
	I0115 10:38:13.353733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354288   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354315   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:13.354237   47689 retry.go:31] will retry after 885.937378ms: waiting for machine to come up
	I0115 10:38:14.241653   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242258   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:14.242185   47689 retry.go:31] will retry after 1.168061338s: waiting for machine to come up
	I0115 10:38:14.984346   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:14.984377   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:14.984395   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.129596   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:15.129627   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:15.129650   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.224825   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.224852   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:15.628377   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.666573   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.666642   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:16.128080   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:16.148642   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:38:16.156904   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:38:16.156927   46387 api_server.go:131] duration metric: took 7.529154555s to wait for apiserver health ...
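The 403 and 500 responses above are the normal progression while the apiserver finishes its post-start hooks (RBAC bootstrap roles, priority classes, CA registration); the health check simply keeps polling /healthz until it returns 200 or a deadline expires. A minimal sketch of that polling loop follows, where the URL, timeout, retry interval, and TLS handling are illustrative assumptions and not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz URL until it returns 200 OK or the
// deadline passes. TLS verification is skipped only because this sketch
// has no access to the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.70:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}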
	I0115 10:38:16.156936   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:38:16.156942   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:16.159248   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:13.665699   46584 crio.go:444] Took 1.986003 seconds to copy over tarball
	I0115 10:38:13.665769   46584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:16.702911   46584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037102789s)
	I0115 10:38:16.702954   46584 crio.go:451] Took 3.037230 seconds to extract the tarball
	I0115 10:38:16.702966   46584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:16.160810   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:16.173072   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:16.205009   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:16.216599   46387 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:16.216637   46387 system_pods.go:61] "coredns-5644d7b6d9-5qcrz" [3fc31c2b-9c3f-4167-8b3f-bbe262591a90] Running
	I0115 10:38:16.216645   46387 system_pods.go:61] "coredns-5644d7b6d9-rgrbc" [1c2c2a33-f329-4cb3-8e05-900a252ceed3] Running
	I0115 10:38:16.216651   46387 system_pods.go:61] "etcd-old-k8s-version-206509" [8c2919cc-4b82-4387-be0d-f3decf4b324b] Running
	I0115 10:38:16.216658   46387 system_pods.go:61] "kube-apiserver-old-k8s-version-206509" [51e63cf2-5728-471d-b447-3f3aa9454ac7] Running
	I0115 10:38:16.216663   46387 system_pods.go:61] "kube-controller-manager-old-k8s-version-206509" [6dec6bf0-ce5d-4f87-8bf7-c774214eb8ea] Running
	I0115 10:38:16.216668   46387 system_pods.go:61] "kube-proxy-w9fdn" [42b28054-8876-4854-a041-62be5688c1c2] Running
	I0115 10:38:16.216675   46387 system_pods.go:61] "kube-scheduler-old-k8s-version-206509" [7a50352c-2129-4de4-84e8-3cb5d8ccd463] Running
	I0115 10:38:16.216681   46387 system_pods.go:61] "storage-provisioner" [f341413b-8261-4a78-9f28-449be173cf19] Running
	I0115 10:38:16.216690   46387 system_pods.go:74] duration metric: took 11.655731ms to wait for pod list to return data ...
	I0115 10:38:16.216703   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:16.220923   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:16.220962   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:16.220978   46387 node_conditions.go:105] duration metric: took 4.267954ms to run NodePressure ...
	I0115 10:38:16.221005   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:16.519042   46387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:16.523772   46387 retry.go:31] will retry after 264.775555ms: kubelet not initialised
	I0115 10:38:17.172203   46387 retry.go:31] will retry after 553.077445ms: kubelet not initialised
	I0115 10:38:18.053202   46387 retry.go:31] will retry after 653.279352ms: kubelet not initialised
	I0115 10:38:18.837753   46387 retry.go:31] will retry after 692.673954ms: kubelet not initialised
	I0115 10:38:19.596427   46387 retry.go:31] will retry after 679.581071ms: kubelet not initialised
	I0115 10:38:15.412204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412706   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412766   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:15.412670   47689 retry.go:31] will retry after 895.041379ms: waiting for machine to come up
	I0115 10:38:16.309188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309764   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:16.309692   47689 retry.go:31] will retry after 1.593821509s: waiting for machine to come up
	I0115 10:38:17.904625   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905131   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905168   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:17.905073   47689 retry.go:31] will retry after 2.002505122s: waiting for machine to come up
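The "will retry after ..." lines above show libmachine polling libvirt for a DHCP lease with a growing, jittered delay until the restarted domain reports an IP address. Here is a minimal sketch of that retry-with-backoff pattern; the lookupIP callback, delay bounds, and cap are illustrative assumptions, not the retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookupIP with an increasing, jittered delay until
// it succeeds or the overall deadline passes.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 2*time.Second {
			delay *= 2 // back off, capped at a few seconds
		}
	}
	return "", errors.New("machine did not get an IP before the deadline")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.50.10", nil
	}, time.Minute)
	fmt.Println(ip, err)
}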
	I0115 10:38:16.745093   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:17.184204   46584 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:17.184235   46584 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:17.184325   46584 ssh_runner.go:195] Run: crio config
	I0115 10:38:17.249723   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:17.249748   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:17.249764   46584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:17.249782   46584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-781270 NodeName:embed-certs-781270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:17.249936   46584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-781270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:17.250027   46584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-781270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:38:17.250091   46584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:17.262237   46584 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:17.262313   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:17.273370   46584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0115 10:38:17.292789   46584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:17.312254   46584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0115 10:38:17.332121   46584 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:17.336199   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:17.349009   46584 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270 for IP: 192.168.72.222
	I0115 10:38:17.349047   46584 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:17.349200   46584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:17.349246   46584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:17.349316   46584 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/client.key
	I0115 10:38:17.685781   46584 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key.4e007618
	I0115 10:38:17.685874   46584 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key
	I0115 10:38:17.685990   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:17.686022   46584 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:17.686033   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:17.686054   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:17.686085   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:17.686107   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:17.686147   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:17.686866   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:17.713652   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:17.744128   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:17.771998   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:17.796880   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:17.822291   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:17.848429   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:17.874193   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:17.898873   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:17.922742   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:17.945123   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:17.967188   46584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:17.983237   46584 ssh_runner.go:195] Run: openssl version
	I0115 10:38:17.988658   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:17.998141   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002462   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002521   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.008136   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:18.017766   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:18.027687   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032418   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032479   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.038349   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:18.048395   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:18.058675   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063369   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063441   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.068886   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:18.078459   46584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:18.083181   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:18.089264   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:18.095399   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:18.101292   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:18.107113   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:18.112791   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:38:18.118337   46584 kubeadm.go:404] StartCluster: {Name:embed-certs-781270 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:18.118561   46584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:18.118611   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:18.162363   46584 cri.go:89] found id: ""
	I0115 10:38:18.162454   46584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:18.172261   46584 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:18.172286   46584 kubeadm.go:636] restartCluster start
	I0115 10:38:18.172357   46584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:18.181043   46584 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.182845   46584 kubeconfig.go:92] found "embed-certs-781270" server: "https://192.168.72.222:8443"
	I0115 10:38:18.186506   46584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:18.194997   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.195069   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.205576   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.695105   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.695200   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.709836   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.195362   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.195533   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.210585   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.695088   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.695201   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.710436   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.196063   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.196145   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.211948   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.695433   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.695545   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.710981   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.195510   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.195588   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.206769   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.695111   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.695192   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.706765   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.288898   46387 retry.go:31] will retry after 1.97886626s: kubelet not initialised
	I0115 10:38:22.273756   46387 retry.go:31] will retry after 2.35083465s: kubelet not initialised
	I0115 10:38:19.909015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909598   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:19.909539   47689 retry.go:31] will retry after 2.883430325s: waiting for machine to come up
	I0115 10:38:22.794280   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794702   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794729   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:22.794660   47689 retry.go:31] will retry after 3.219865103s: waiting for machine to come up
	I0115 10:38:22.195343   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.195454   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.210740   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:22.695835   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.695900   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.710247   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.195555   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.195633   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.207117   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.695569   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.695632   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.706867   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.195323   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.195428   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.207679   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.695971   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.696049   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.708342   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.195900   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.195994   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.207896   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.695417   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.695490   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.706180   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.195799   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.195890   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.206859   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.695558   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.695648   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.706652   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.630486   46387 retry.go:31] will retry after 5.638904534s: kubelet not initialised
	I0115 10:38:26.016121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016496   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016520   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:26.016463   47689 retry.go:31] will retry after 3.426285557s: waiting for machine to come up
	I0115 10:38:29.447165   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447643   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Found IP for machine: 192.168.39.125
	I0115 10:38:29.447678   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has current primary IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447719   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserving static IP address...
	I0115 10:38:29.448146   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.448172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | skip adding static IP to network mk-default-k8s-diff-port-709012 - found existing host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"}
	I0115 10:38:29.448183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserved static IP address: 192.168.39.125
	I0115 10:38:29.448204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for SSH to be available...
	I0115 10:38:29.448215   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Getting to WaitForSSH function...
	I0115 10:38:29.450376   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450690   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.450715   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH client type: external
	I0115 10:38:29.450867   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa (-rw-------)
	I0115 10:38:29.450899   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:29.450909   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | About to run SSH command:
	I0115 10:38:29.450919   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | exit 0
	I0115 10:38:29.550560   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:29.550940   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetConfigRaw
	I0115 10:38:29.551686   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.554629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555085   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.555117   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555426   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:38:29.555642   47063 machine.go:88] provisioning docker machine ...
	I0115 10:38:29.555672   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:29.555875   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556053   47063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-709012"
	I0115 10:38:29.556076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556217   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.558493   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.558804   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.558835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.559018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.559209   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559363   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.559677   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.560009   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.560028   47063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-709012 && echo "default-k8s-diff-port-709012" | sudo tee /etc/hostname
	I0115 10:38:29.706028   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-709012
	
	I0115 10:38:29.706059   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.708893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.709343   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709409   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.709631   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709789   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709938   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.710121   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.710473   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.710501   47063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-709012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-709012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-709012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:29.845884   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:29.845916   47063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:29.845938   47063 buildroot.go:174] setting up certificates
	I0115 10:38:29.845953   47063 provision.go:83] configureAuth start
	I0115 10:38:29.845973   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.846293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.849072   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.849558   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849755   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.852196   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852548   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.852574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852664   47063 provision.go:138] copyHostCerts
	I0115 10:38:29.852716   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:29.852726   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:29.852778   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:29.852870   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:29.852877   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:29.852896   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:29.852957   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:29.852964   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:29.852981   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:29.853031   47063 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-709012 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube default-k8s-diff-port-709012]
	I0115 10:38:30.777181   46388 start.go:369] acquired machines lock for "no-preload-824502" in 58.676870352s
	I0115 10:38:30.777252   46388 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:30.777263   46388 fix.go:54] fixHost starting: 
	I0115 10:38:30.777697   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:30.777733   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:30.795556   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0115 10:38:30.795931   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:30.796387   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:38:30.796417   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:30.796825   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:30.797001   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:30.797164   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:38:30.798953   46388 fix.go:102] recreateIfNeeded on no-preload-824502: state=Stopped err=<nil>
	I0115 10:38:30.798978   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	W0115 10:38:30.799146   46388 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:30.800981   46388 out.go:177] * Restarting existing kvm2 VM for "no-preload-824502" ...
	I0115 10:38:27.195033   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.195128   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.205968   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:27.695992   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.696075   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.707112   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.195726   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:28.195798   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:28.206794   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.206837   46584 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:28.206846   46584 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:28.206858   46584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:28.206917   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:28.256399   46584 cri.go:89] found id: ""
	I0115 10:38:28.256468   46584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:28.272234   46584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:28.281359   46584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:28.281439   46584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290385   46584 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290431   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:28.417681   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.012673   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.212322   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.296161   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.378870   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:29.378965   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.879587   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.379077   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.879281   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:31.379626   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.951966   47063 provision.go:172] copyRemoteCerts
	I0115 10:38:29.952019   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:29.952040   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.954784   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955082   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.955104   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955285   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.955466   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.955649   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.955793   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.057077   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:30.081541   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 10:38:30.109962   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:30.140809   47063 provision.go:86] duration metric: configureAuth took 294.836045ms
	I0115 10:38:30.140840   47063 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:30.141071   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:30.141167   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.144633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.144975   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.145015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.145177   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.145378   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145539   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145703   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.145927   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.146287   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.146310   47063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:30.484993   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:30.485022   47063 machine.go:91] provisioned docker machine in 929.358403ms
	I0115 10:38:30.485035   47063 start.go:300] post-start starting for "default-k8s-diff-port-709012" (driver="kvm2")
	I0115 10:38:30.485049   47063 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:30.485067   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.485390   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:30.485431   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.488115   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488473   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.488512   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.488837   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.489018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.489171   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.590174   47063 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:30.594879   47063 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:30.594907   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:30.594974   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:30.595069   47063 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:30.595183   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:30.604525   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:30.631240   47063 start.go:303] post-start completed in 146.190685ms
	I0115 10:38:30.631270   47063 fix.go:56] fixHost completed within 20.431996373s
	I0115 10:38:30.631293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.634188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634544   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.634577   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634807   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.635014   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635185   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.635574   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.636012   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.636032   47063 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:30.777043   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315110.724251584
	
	I0115 10:38:30.777069   47063 fix.go:206] guest clock: 1705315110.724251584
	I0115 10:38:30.777079   47063 fix.go:219] Guest: 2024-01-15 10:38:30.724251584 +0000 UTC Remote: 2024-01-15 10:38:30.631274763 +0000 UTC m=+210.817197544 (delta=92.976821ms)
	I0115 10:38:30.777107   47063 fix.go:190] guest clock delta is within tolerance: 92.976821ms
	I0115 10:38:30.777114   47063 start.go:83] releasing machines lock for "default-k8s-diff-port-709012", held for 20.577876265s
	I0115 10:38:30.777143   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.777406   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:30.780611   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781041   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.781076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781250   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.781876   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782186   47063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:30.782240   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.782295   47063 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:30.782321   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.785597   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786228   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.786255   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786386   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786698   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.786881   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.787023   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.787078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.787095   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.787204   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.787774   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.787930   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.788121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.788345   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.919659   47063 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:30.926237   47063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:31.076313   47063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:31.085010   47063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:31.085087   47063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:31.104237   47063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:31.104265   47063 start.go:475] detecting cgroup driver to use...
	I0115 10:38:31.104331   47063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:31.124044   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:31.139494   47063 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:31.139581   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:31.154894   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:31.172458   47063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:31.307400   47063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:31.496675   47063 docker.go:233] disabling docker service ...
	I0115 10:38:31.496733   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:31.513632   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:31.526228   47063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:31.681556   47063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:31.816489   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:31.831193   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:31.853530   47063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:31.853602   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.864559   47063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:31.864661   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.875384   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.888460   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.904536   47063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:31.915622   47063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:31.929209   47063 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:31.929266   47063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:31.948691   47063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:31.959872   47063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:32.102988   47063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:32.300557   47063 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:32.300632   47063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:32.305636   47063 start.go:543] Will wait 60s for crictl version
	I0115 10:38:32.305691   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:38:32.309883   47063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:32.354459   47063 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:32.354594   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.402443   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.463150   47063 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:30.802324   46388 main.go:141] libmachine: (no-preload-824502) Calling .Start
	I0115 10:38:30.802525   46388 main.go:141] libmachine: (no-preload-824502) Ensuring networks are active...
	I0115 10:38:30.803127   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network default is active
	I0115 10:38:30.803476   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network mk-no-preload-824502 is active
	I0115 10:38:30.803799   46388 main.go:141] libmachine: (no-preload-824502) Getting domain xml...
	I0115 10:38:30.804452   46388 main.go:141] libmachine: (no-preload-824502) Creating domain...
	I0115 10:38:32.173614   46388 main.go:141] libmachine: (no-preload-824502) Waiting to get IP...
	I0115 10:38:32.174650   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.175113   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.175211   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.175106   47808 retry.go:31] will retry after 275.127374ms: waiting for machine to come up
	I0115 10:38:32.451595   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.452150   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.452183   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.452095   47808 retry.go:31] will retry after 258.80121ms: waiting for machine to come up
	I0115 10:38:32.712701   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.713348   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.713531   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.713459   47808 retry.go:31] will retry after 440.227123ms: waiting for machine to come up
	I0115 10:38:33.155845   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.156595   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.156625   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.156500   47808 retry.go:31] will retry after 428.795384ms: waiting for machine to come up
	I0115 10:38:33.587781   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.588169   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.588190   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.588118   47808 retry.go:31] will retry after 720.536787ms: waiting for machine to come up
	I0115 10:38:34.310098   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:34.310640   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:34.310674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:34.310604   47808 retry.go:31] will retry after 841.490959ms: waiting for machine to come up
	I0115 10:38:30.274782   46387 retry.go:31] will retry after 7.853808987s: kubelet not initialised
	I0115 10:38:32.464592   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:32.467583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.467962   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:32.467993   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.468218   47063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:32.472463   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:32.488399   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:32.488488   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:32.535645   47063 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:32.535776   47063 ssh_runner.go:195] Run: which lz4
	I0115 10:38:32.541468   47063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:38:32.547264   47063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:32.547297   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:34.427435   47063 crio.go:444] Took 1.886019 seconds to copy over tarball
	I0115 10:38:34.427510   47063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:31.879639   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.379656   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.408694   46584 api_server.go:72] duration metric: took 3.029823539s to wait for apiserver process to appear ...
	I0115 10:38:32.408737   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:32.408760   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.614020   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:36.614053   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:36.614068   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.687561   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.687606   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.134400   46387 retry.go:31] will retry after 7.988567077s: kubelet not initialised
	I0115 10:38:35.154196   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:35.154644   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:35.154674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:35.154615   47808 retry.go:31] will retry after 1.099346274s: waiting for machine to come up
	I0115 10:38:36.255575   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:36.256111   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:36.256151   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:36.256038   47808 retry.go:31] will retry after 1.294045748s: waiting for machine to come up
	I0115 10:38:37.551734   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:37.552569   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:37.552593   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:37.552527   47808 retry.go:31] will retry after 1.720800907s: waiting for machine to come up
	I0115 10:38:39.275250   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:39.275651   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:39.275684   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:39.275595   47808 retry.go:31] will retry after 1.914509744s: waiting for machine to come up
	I0115 10:38:37.765711   47063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.338169875s)
	I0115 10:38:37.765741   47063 crio.go:451] Took 3.338279 seconds to extract the tarball
	I0115 10:38:37.765753   47063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:37.807016   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:37.858151   47063 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:37.858195   47063 cache_images.go:84] Images are preloaded, skipping loading
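A rough shell sketch of the preload flow recorded above (check the runtime's image store first, and only unpack the tarball when the expected image is missing); the image name, tarball path, and tar flags are taken from the log, the grep-based check is an assumption:

# Does CRI-O already have the preloaded control-plane images?
if sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.4'; then
  echo "images already preloaded"
else
  # Otherwise unpack the preload tarball into /var and remove it afterwards
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
fi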
	I0115 10:38:37.858295   47063 ssh_runner.go:195] Run: crio config
	I0115 10:38:37.933830   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:37.933851   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:37.933872   47063 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:37.933896   47063 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-709012 NodeName:default-k8s-diff-port-709012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:37.934040   47063 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-709012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:37.934132   47063 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-709012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0115 10:38:37.934202   47063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:37.945646   47063 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:37.945728   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:37.957049   47063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0115 10:38:37.978770   47063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:37.995277   47063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
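One way to sanity-check a rendered kubeadm config like the one dumped above is kubeadm's dry-run mode; this is only a sketch (the path comes from the log, and the test itself restarts the existing cluster rather than re-initialising it):

# Validate the generated config without changing anything on the node
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run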
	I0115 10:38:38.012964   47063 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:38.016803   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:38.028708   47063 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012 for IP: 192.168.39.125
	I0115 10:38:38.028740   47063 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:38.028887   47063 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:38.028926   47063 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:38.028988   47063 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.key
	I0115 10:38:38.048801   47063 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key.657bd91f
	I0115 10:38:38.048895   47063 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key
	I0115 10:38:38.049019   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:38.049058   47063 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:38.049075   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:38.049110   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:38.049149   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:38.049183   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:38.049241   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:38.049848   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:38.078730   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:38.102069   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:38.124278   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:38.150354   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:38.173703   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:38.201758   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:38.227016   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:38.249876   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:38.271859   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:38.294051   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:38.316673   47063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:38.335128   47063 ssh_runner.go:195] Run: openssl version
	I0115 10:38:38.342574   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:38.355889   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361805   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361871   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.369192   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:38.381493   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:38.391714   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396728   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396787   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.402624   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:38.413957   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:38.425258   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430627   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430697   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.440362   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
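The test -L / ln -fs steps above reproduce the c_rehash idiom: OpenSSL locates a CA in /etc/ssl/certs through a symlink named after the certificate's subject hash (the b5213941 value seen earlier is exactly such a hash). A minimal sketch using the minikubeCA certificate from the log:

# Compute the subject hash OpenSSL uses to look the CA up at verify time
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
# Link the certificate under that hash so system-wide TLS verification can find it
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"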
	I0115 10:38:38.453323   47063 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:38.458803   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:38.465301   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:38.471897   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:38.478274   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:38.484890   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:38.490909   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
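The -checkend 86400 calls above ask OpenSSL whether each certificate expires within the next 24 hours (86,400 seconds): the command exits 0 if the certificate stays valid past that window and non-zero otherwise. A small sketch of the same check, with the path taken from the log:

if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h (or is already expired)" >&2
fi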
	I0115 10:38:38.496868   47063 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:38.496966   47063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:38.497015   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:38.539389   47063 cri.go:89] found id: ""
	I0115 10:38:38.539475   47063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:38.550998   47063 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:38.551020   47063 kubeadm.go:636] restartCluster start
	I0115 10:38:38.551076   47063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:38.561885   47063 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:38.563439   47063 kubeconfig.go:92] found "default-k8s-diff-port-709012" server: "https://192.168.39.125:8444"
	I0115 10:38:38.566482   47063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:38.576458   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:38.576521   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:38.588702   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.077323   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.077407   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.089885   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.577363   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.577441   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.591111   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:36.909069   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.917556   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.917594   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.409134   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.417305   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.417348   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.909251   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.916788   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.916824   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.409535   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:38.416538   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:38.416572   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.908929   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.863238   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.863279   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.863294   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.869897   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.869922   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.909113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.065422   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:40.065467   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:40.408921   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.414320   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:38:40.424348   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:40.424378   46584 api_server.go:131] duration metric: took 8.015632919s to wait for apiserver health ...
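	The wait above polls the apiserver's /healthz endpoint until the failing [-]poststarthook/rbac/bootstrap-roles hook completes and the response flips from 500 to 200. A minimal Go sketch of that polling loop, assuming the address from the log and skipping TLS verification since the serving certificate is issued by minikube's own CA:

	// healthz_wait.go: sketch of the /healthz polling seen above (address taken from the log).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.222:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok:", string(body))
					return
				}
				// A 500 here means some [-] post-start hook (e.g. rbac/bootstrap-roles) is still pending.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver health")
	}

	minikube logs the full hook list at both I and W level on every non-200 response, which is why the [+]/[-] blocks repeat verbatim above.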
	I0115 10:38:40.424390   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:40.424398   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:40.426615   46584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:40.427979   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:40.450675   46584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
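	The 457-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. A sketch of a conventional bridge + host-local + portmap chain of roughly that shape, written the same way the mkdir/scp step above does; the subnet and plugin options here are assumptions, not the exact file minikube ships:

	// write_conflist.go: assumed bridge CNI config, stand-in for the file scp'd above.
	package main

	import (
		"fmt"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		// Equivalent of the "sudo mkdir -p /etc/cni/net.d" plus scp step in the log.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Println("mkdir:", err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Println("wrote 1-k8s.conflist")
	}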
	I0115 10:38:40.478174   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:40.492540   46584 system_pods.go:59] 9 kube-system pods found
	I0115 10:38:40.492582   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492593   46584 system_pods.go:61] "coredns-5dd5756b68-w4p2z" [87d362df-5c29-4a04-b44f-c502cf6849bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492609   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:40.492619   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:40.492633   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:40.492646   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:40.492658   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:40.492671   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:40.492687   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:40.492700   46584 system_pods.go:74] duration metric: took 14.502202ms to wait for pod list to return data ...
	I0115 10:38:40.492715   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:40.496471   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:40.496504   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:40.496517   46584 node_conditions.go:105] duration metric: took 3.794528ms to run NodePressure ...
	I0115 10:38:40.496538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:40.770732   46584 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777051   46584 kubeadm.go:787] kubelet initialised
	I0115 10:38:40.777118   46584 kubeadm.go:788] duration metric: took 6.307286ms waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777139   46584 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:40.784605   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.798293   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798365   46584 pod_ready.go:81] duration metric: took 13.654765ms waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.798389   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798402   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.807236   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807276   46584 pod_ready.go:81] duration metric: took 8.862426ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.807289   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807297   46584 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.813904   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813932   46584 pod_ready.go:81] duration metric: took 6.62492ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.813944   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813951   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.882407   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882458   46584 pod_ready.go:81] duration metric: took 68.496269ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.882472   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882485   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.282123   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282158   46584 pod_ready.go:81] duration metric: took 399.656962ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.282172   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282181   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.683979   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684007   46584 pod_ready.go:81] duration metric: took 401.816493ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.684017   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684023   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.082465   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082490   46584 pod_ready.go:81] duration metric: took 398.460424ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.082501   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082509   46584 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.484454   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484490   46584 pod_ready.go:81] duration metric: took 401.970108ms waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.484504   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484513   46584 pod_ready.go:38] duration metric: took 1.707353329s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
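	Each pod_ready wait above fetches the pod and checks its Ready condition, and is cut short while the node itself still reports Ready=False (the "skipping!" lines). A small client-go sketch of just the condition check, with an assumed kubeconfig path and the coredns pod name taken from the log:

	// pod_ready_sketch.go: minimal Ready-condition wait, not minikube's own pod_ready.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; the test uses its own profile under the jenkins home.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-n59ft", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}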
	I0115 10:38:42.484534   46584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:42.499693   46584 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:42.499715   46584 kubeadm.go:640] restartCluster took 24.327423485s
	I0115 10:38:42.499733   46584 kubeadm.go:406] StartCluster complete in 24.381392225s
	I0115 10:38:42.499752   46584 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.499817   46584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:42.502965   46584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.503219   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:42.503253   46584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:42.503356   46584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-781270"
	I0115 10:38:42.503374   46584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-781270"
	I0115 10:38:42.503383   46584 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-781270"
	I0115 10:38:42.503395   46584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-781270"
	W0115 10:38:42.503402   46584 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:42.503451   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:42.503493   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503504   46584 addons.go:69] Setting metrics-server=true in profile "embed-certs-781270"
	I0115 10:38:42.503520   46584 addons.go:234] Setting addon metrics-server=true in "embed-certs-781270"
	W0115 10:38:42.503533   46584 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:42.503577   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503826   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503850   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503855   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503871   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503895   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503924   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.522809   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0115 10:38:42.523025   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0115 10:38:42.523163   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0115 10:38:42.523260   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523382   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523755   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523861   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.523990   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524323   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524345   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524415   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524585   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524605   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524825   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524992   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525017   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525375   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525412   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525571   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.525747   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.528762   46584 addons.go:234] Setting addon default-storageclass=true in "embed-certs-781270"
	W0115 10:38:42.528781   46584 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:42.528807   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.529117   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.529140   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.544693   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0115 10:38:42.545013   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0115 10:38:42.545528   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.545628   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.546235   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546265   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546268   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546280   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546650   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546687   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546820   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.546918   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0115 10:38:42.547068   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.547459   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.548255   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.548269   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.548859   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.549002   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.549393   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.549415   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.549597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.551555   46584 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:42.552918   46584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:42.554551   46584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.554573   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:42.554591   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.554552   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:42.554648   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:42.554662   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.561284   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.561706   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.561854   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.562023   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.562123   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.562179   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.562229   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.564058   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564432   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.564529   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564798   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.564977   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.565148   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.565282   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.570688   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0115 10:38:42.571242   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.571724   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.571749   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.571989   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.572135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.573685   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.573936   46584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.573952   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:42.573969   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.576948   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577272   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.577312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577680   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.577866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.577988   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.578134   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.687267   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:42.687293   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:42.707058   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:42.707083   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:42.727026   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.745278   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.777425   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:42.777450   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:42.779528   46584 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:42.832109   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
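	The addon manifests are applied with the kubectl binary and kubeconfig inside the VM, via the exact command shown above. A local Go stand-in for that step (manifest paths copied from the log; in the test it actually runs over SSH through ssh_runner with the bundled v1.28.4 kubectl):

	// apply_addons.go: sketch of the kubectl apply step above, run locally for brevity.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("kubectl", args...)
		// Same kubeconfig the log passes via the KUBECONFIG environment variable.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}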
	I0115 10:38:43.011971   46584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-781270" context rescaled to 1 replicas
	I0115 10:38:43.012022   46584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:43.014704   46584 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:43.016005   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:44.039814   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.294486297s)
	I0115 10:38:44.039891   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312831152s)
	I0115 10:38:44.039895   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039928   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039946   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040024   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040264   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040283   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040293   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040302   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040412   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040427   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040451   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040613   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040744   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040750   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040755   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040791   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040800   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054113   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.054134   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.054409   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.054454   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054469   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.151470   46584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.135429651s)
	I0115 10:38:44.151517   46584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:44.151560   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319411531s)
	I0115 10:38:44.151601   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.151626   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.151954   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.151973   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152001   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.152012   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.152312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.152317   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.152328   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152338   46584 addons.go:470] Verifying addon metrics-server=true in "embed-certs-781270"
	I0115 10:38:44.155687   46584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:41.191855   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:41.192271   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:41.192310   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:41.192239   47808 retry.go:31] will retry after 2.364591434s: waiting for machine to come up
	I0115 10:38:43.560150   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:43.560624   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:43.560648   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:43.560581   47808 retry.go:31] will retry after 3.204170036s: waiting for machine to come up
	I0115 10:38:40.076788   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.076875   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.089217   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:40.577351   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.577448   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.593294   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.076625   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.076730   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.092700   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.577413   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.577513   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.592266   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.076755   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.076862   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.090411   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.576920   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.576982   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.589590   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.077312   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.077410   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.089732   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.576781   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.576857   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.592414   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.076854   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.076922   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.089009   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.576614   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.576713   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.592137   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
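	The 47063 run never finds a kube-apiserver process: pgrep exits non-zero on every attempt until the surrounding context deadline expires, which is what later forces the full reconfigure. A sketch of that probe loop, run locally here instead of over SSH:

	// apiserver_probe.go: sketch of the repeated pgrep check above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			// pgrep exits non-zero when nothing matches, which is what the W-level lines above record.
			fmt.Println("no kube-apiserver process yet, retrying")
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up: apiserver never came up, falling back to a full reconfigure")
	}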
	I0115 10:38:44.157450   46584 addons.go:505] enable addons completed in 1.654202196s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:38:46.156830   46584 node_ready.go:58] node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:46.129496   46387 retry.go:31] will retry after 7.881779007s: kubelet not initialised
	I0115 10:38:46.766674   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:46.767050   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:46.767072   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:46.767013   47808 retry.go:31] will retry after 3.09324278s: waiting for machine to come up
	I0115 10:38:45.076819   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.076882   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.092624   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:45.576654   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.576724   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.590306   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.076821   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.076920   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.090883   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.577506   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.577590   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.590379   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.076909   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.076997   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.088742   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.577287   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.577371   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.589014   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.076538   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.076608   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.088956   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.576474   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.576573   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.588122   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.588146   47063 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:48.588153   47063 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:48.588162   47063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:48.588214   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:48.631367   47063 cri.go:89] found id: ""
	I0115 10:38:48.631441   47063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:48.648653   47063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:48.657948   47063 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:48.658017   47063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668103   47063 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668124   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:48.787890   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.559039   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.767002   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.842165   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
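	The reconfigure replays the kubeadm init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the same /var/tmp/minikube/kubeadm.yaml. The same sequence as plain command invocations, assuming kubeadm is on PATH rather than under /var/lib/minikube/binaries as in the log:

	// kubeadm_phases.go: the reconfigure sequence above as a small driver.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		}
		for _, args := range phases {
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Println("phase failed:", args, err)
				return
			}
		}
		fmt.Println("control plane reconfigured")
	}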
	I0115 10:38:47.155176   46584 node_ready.go:49] node "embed-certs-781270" has status "Ready":"True"
	I0115 10:38:47.155200   46584 node_ready.go:38] duration metric: took 3.003671558s waiting for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:47.155212   46584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:47.162248   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:49.169885   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:51.190513   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:49.864075   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864515   46388 main.go:141] libmachine: (no-preload-824502) Found IP for machine: 192.168.50.136
	I0115 10:38:49.864538   46388 main.go:141] libmachine: (no-preload-824502) Reserving static IP address...
	I0115 10:38:49.864554   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has current primary IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864990   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.865034   46388 main.go:141] libmachine: (no-preload-824502) DBG | skip adding static IP to network mk-no-preload-824502 - found existing host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"}
	I0115 10:38:49.865052   46388 main.go:141] libmachine: (no-preload-824502) Reserved static IP address: 192.168.50.136
	I0115 10:38:49.865073   46388 main.go:141] libmachine: (no-preload-824502) Waiting for SSH to be available...
	I0115 10:38:49.865115   46388 main.go:141] libmachine: (no-preload-824502) DBG | Getting to WaitForSSH function...
	I0115 10:38:49.867410   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867671   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.867708   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867864   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH client type: external
	I0115 10:38:49.867924   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa (-rw-------)
	I0115 10:38:49.867961   46388 main.go:141] libmachine: (no-preload-824502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:49.867983   46388 main.go:141] libmachine: (no-preload-824502) DBG | About to run SSH command:
	I0115 10:38:49.867994   46388 main.go:141] libmachine: (no-preload-824502) DBG | exit 0
	I0115 10:38:49.966638   46388 main.go:141] libmachine: (no-preload-824502) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:49.967072   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetConfigRaw
	I0115 10:38:49.967925   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:49.970409   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.970811   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.970846   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.971099   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:38:49.971300   46388 machine.go:88] provisioning docker machine ...
	I0115 10:38:49.971327   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:49.971561   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971757   46388 buildroot.go:166] provisioning hostname "no-preload-824502"
	I0115 10:38:49.971783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971970   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:49.974279   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974723   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.974758   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974917   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:49.975088   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975247   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975460   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:49.975640   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:49.976081   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:49.976099   46388 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-824502 && echo "no-preload-824502" | sudo tee /etc/hostname
	I0115 10:38:50.121181   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-824502
	
	I0115 10:38:50.121206   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.123821   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124162   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.124194   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124371   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.124588   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124788   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124940   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.125103   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.125410   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.125429   46388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-824502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-824502/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-824502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:50.259649   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:50.259680   46388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:50.259710   46388 buildroot.go:174] setting up certificates
	I0115 10:38:50.259724   46388 provision.go:83] configureAuth start
	I0115 10:38:50.259736   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:50.260022   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:50.262296   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262683   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.262704   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262848   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.265340   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265715   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.265743   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265885   46388 provision.go:138] copyHostCerts
	I0115 10:38:50.265942   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:50.265953   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:50.266025   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:50.266128   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:50.266143   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:50.266178   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:50.266258   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:50.266268   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:50.266296   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:50.266359   46388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.no-preload-824502 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube no-preload-824502]
	I0115 10:38:50.666513   46388 provision.go:172] copyRemoteCerts
	I0115 10:38:50.666584   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:50.666615   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.669658   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670109   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.670162   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670410   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.670632   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.670812   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.671067   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:50.774944   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:50.799533   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:50.824210   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:38:50.849191   46388 provision.go:86] duration metric: configureAuth took 589.452836ms
	I0115 10:38:50.849224   46388 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:50.849455   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:38:50.849560   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.852884   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853291   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.853346   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853508   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.853746   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.853936   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.854105   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.854244   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.854708   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.854735   46388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:51.246971   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:51.246997   46388 machine.go:91] provisioned docker machine in 1.275679147s
	I0115 10:38:51.247026   46388 start.go:300] post-start starting for "no-preload-824502" (driver="kvm2")
	I0115 10:38:51.247043   46388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:51.247063   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.247450   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:51.247481   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.250477   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250751   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.250783   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250951   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.251085   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.251227   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.251308   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.350552   46388 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:51.355893   46388 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:51.355918   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:51.355994   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:51.356096   46388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:51.356220   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:51.366598   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:51.393765   46388 start.go:303] post-start completed in 146.702407ms
	I0115 10:38:51.393798   46388 fix.go:56] fixHost completed within 20.616533939s
	I0115 10:38:51.393826   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.396990   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397531   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.397563   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397785   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.398006   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398190   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398367   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.398602   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:51.399038   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:51.399057   46388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:51.532940   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315131.477577825
	
	I0115 10:38:51.532962   46388 fix.go:206] guest clock: 1705315131.477577825
	I0115 10:38:51.532971   46388 fix.go:219] Guest: 2024-01-15 10:38:51.477577825 +0000 UTC Remote: 2024-01-15 10:38:51.393803771 +0000 UTC m=+361.851018624 (delta=83.774054ms)
	I0115 10:38:51.533006   46388 fix.go:190] guest clock delta is within tolerance: 83.774054ms
	I0115 10:38:51.533011   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 20.755785276s
	I0115 10:38:51.533031   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.533296   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:51.536532   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537167   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.537206   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537411   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538058   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538236   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538395   46388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:51.538461   46388 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:51.538485   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.538492   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.541387   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541419   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541791   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541836   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541878   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541952   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.541959   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.542137   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542219   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.542317   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542396   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542477   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.542535   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542697   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.668594   46388 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:51.675328   46388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:51.822660   46388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:51.830242   46388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:51.830318   46388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:51.846032   46388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:51.846067   46388 start.go:475] detecting cgroup driver to use...
	I0115 10:38:51.846147   46388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:51.863608   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:51.875742   46388 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:51.875810   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:51.888307   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:51.902327   46388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:52.027186   46388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:52.170290   46388 docker.go:233] disabling docker service ...
	I0115 10:38:52.170372   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:52.184106   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:52.195719   46388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:52.304630   46388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:52.420312   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:52.434213   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:52.453883   46388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:52.453946   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.464662   46388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:52.464726   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.474291   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.483951   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.493132   46388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:52.503668   46388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:52.512336   46388 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:52.512410   46388 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:52.529602   46388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:52.541735   46388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:52.664696   46388 ssh_runner.go:195] Run: sudo systemctl restart crio
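The lines above record the CRI-O reconfiguration pass: sed edits to /etc/crio/crio.conf.d/02-crio.conf that set the pause image and switch the cgroup manager to cgroupfs, followed by a daemon-reload and a crio restart. The following is a minimal standalone sketch of that sequence (an illustration, not minikube's own ssh_runner code), running the same shell steps locally via os/exec; the paths and values are taken from the logged commands.

// Hypothetical standalone sketch: apply the CRI-O drop-in edits shown in the
// log above, using os/exec locally instead of minikube's SSH runner.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// point CRI-O at the pause image minikube expects
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		// switch the cgroup manager to cgroupfs and pin conmon to the pod cgroup
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// reload units and restart CRI-O so the drop-in takes effect
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}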
	I0115 10:38:52.844980   46388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:52.845051   46388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:52.850380   46388 start.go:543] Will wait 60s for crictl version
	I0115 10:38:52.850463   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:52.854500   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:52.890488   46388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:52.890595   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:52.944999   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:53.005494   46388 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:38:54.017897   46387 retry.go:31] will retry after 11.956919729s: kubelet not initialised
	I0115 10:38:53.006783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:53.009509   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.009903   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:53.009934   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.010135   46388 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:53.014612   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
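The grep plus bash one-liner above ensures /etc/hosts inside the guest carries a single host.minikube.internal entry pointing at the gateway IP. A small hedged Go sketch of the same idea follows (illustrative only; the path, IP, and hostname are copied from the logged command), dropping any stale entry before appending a fresh one.

// Hedged sketch of the /etc/hosts update logged above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any stale entry for the host before re-adding it
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}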
	I0115 10:38:53.029014   46388 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:38:53.029063   46388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:53.073803   46388 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:38:53.073839   46388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:38:53.073906   46388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.073943   46388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.073979   46388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.073945   46388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.073914   46388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.073932   46388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.073931   46388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.073918   46388 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075357   46388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.075478   46388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.075515   46388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.075532   46388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.075482   46388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.075483   46388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.234170   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.248000   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0115 10:38:53.264387   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.289576   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.303961   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.326078   46388 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0115 10:38:53.326132   46388 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.326176   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.331268   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.334628   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.366099   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.426012   46388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0115 10:38:53.426058   46388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.426106   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.426316   46388 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0115 10:38:53.426346   46388 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.426377   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505102   46388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0115 10:38:53.505194   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.505201   46388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.505286   46388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0115 10:38:53.505358   46388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.505410   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505334   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.507596   46388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0115 10:38:53.507630   46388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.507674   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.544052   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.544142   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.544078   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.544263   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.544458   46388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0115 10:38:53.544505   46388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.544550   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.568682   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0115 10:38:53.568786   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.568808   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.681576   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681671   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0115 10:38:53.681777   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:53.681779   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681918   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.681990   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.682040   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0115 10:38:53.682108   46388 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681996   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.682157   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681927   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 10:38:53.682277   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:53.728102   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:53.728204   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
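This stretch of the log is the image-cache path for the no-preload profile: since no preloaded tarball matched, each required image is inspected in the runtime, removed if it does not match the expected digest, and then loaded from minikube's on-disk cache with podman load. The sketch below is a simplified, hedged approximation of that check-then-load step (the image name and tarball path are taken from the log; this is not minikube's cache_images implementation).

// Hedged sketch of the cache check walked through above: ask podman whether an
// image is already present and, if not, load it from the cached tarball.
package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is absent
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		fmt.Println("already present:", image)
		return nil
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Println("loaded", image, "from", tarball)
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/coredns/coredns:v1.11.1",
		"/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
		fmt.Println(err)
	}
}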
	I0115 10:38:49.944443   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:49.944529   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.445085   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.945395   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.444784   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.944622   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.444886   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.460961   47063 api_server.go:72] duration metric: took 2.516519088s to wait for apiserver process to appear ...
	I0115 10:38:52.460980   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:52.460996   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:52.461498   47063 api_server.go:269] stopped: https://192.168.39.125:8444/healthz: Get "https://192.168.39.125:8444/healthz": dial tcp 192.168.39.125:8444: connect: connection refused
	I0115 10:38:52.961968   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:53.672555   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:55.685156   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:56.172493   46584 pod_ready.go:92] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.172521   46584 pod_ready.go:81] duration metric: took 9.010249071s waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.172534   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.178057   46584 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178080   46584 pod_ready.go:81] duration metric: took 5.538491ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:56.178092   46584 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178100   46584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185048   46584 pod_ready.go:92] pod "etcd-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.185071   46584 pod_ready.go:81] duration metric: took 6.962528ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185082   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190244   46584 pod_ready.go:92] pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.190263   46584 pod_ready.go:81] duration metric: took 5.173778ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190275   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196537   46584 pod_ready.go:92] pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.196555   46584 pod_ready.go:81] duration metric: took 6.272551ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196566   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367735   46584 pod_ready.go:92] pod "kube-proxy-jqgfc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.367766   46584 pod_ready.go:81] duration metric: took 171.191874ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367779   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.209201   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.209232   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.209247   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.283870   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.283914   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.461166   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.476935   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.476968   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:56.961147   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.966917   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.966949   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:57.461505   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:57.467290   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:38:57.482673   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:57.482709   47063 api_server.go:131] duration metric: took 5.021721974s to wait for apiserver health ...
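The healthz exchange above shows the usual restart pattern: connection refused while the apiserver process comes up, then 403 for the anonymous healthz probe, then 500 while the rbac and scheduling post-start hooks finish, and finally 200 "ok". A minimal hedged sketch of that polling loop follows; the URL and timeout are placeholders from the log, and TLS verification is skipped purely for illustration.

// Poll an apiserver /healthz endpoint until it returns 200, treating 403/500
// responses as "not ready yet", as seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// 403 (anonymous healthz) and 500 (post-start hooks pending) mean keep waiting
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.125:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}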
	I0115 10:38:57.482721   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:57.482729   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:57.484809   47063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:57.486522   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:57.503036   47063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:57.523094   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:57.539289   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:57.539332   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:57.539342   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:57.539353   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:57.539361   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:57.539367   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:57.539372   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:57.539378   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:57.539392   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:57.539400   47063 system_pods.go:74] duration metric: took 16.288236ms to wait for pod list to return data ...
	I0115 10:38:57.539415   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:57.547016   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:57.547043   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:57.547053   47063 node_conditions.go:105] duration metric: took 7.632954ms to run NodePressure ...
	I0115 10:38:57.547069   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:57.838097   47063 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847919   47063 kubeadm.go:787] kubelet initialised
	I0115 10:38:57.847945   47063 kubeadm.go:788] duration metric: took 9.818012ms waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847960   47063 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:57.860753   47063 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.866623   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866666   47063 pod_ready.go:81] duration metric: took 5.881593ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.866679   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866687   47063 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.873742   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873772   47063 pod_ready.go:81] duration metric: took 7.070689ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.873787   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873795   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.881283   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881313   47063 pod_ready.go:81] duration metric: took 7.502343ms waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.881328   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881335   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.927473   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927504   47063 pod_ready.go:81] duration metric: took 46.159848ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.927516   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927523   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.329002   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329029   47063 pod_ready.go:81] duration metric: took 401.499694ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.329039   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329046   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.727362   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727394   47063 pod_ready.go:81] duration metric: took 398.336577ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.727411   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727420   47063 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:59.138162   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138195   47063 pod_ready.go:81] duration metric: took 410.766568ms waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:59.138207   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138214   47063 pod_ready.go:38] duration metric: took 1.290244752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
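The pod_ready entries above repeatedly skip system-critical pods while the node itself still reports Ready=False. As a rough, hedged illustration of what such a wait amounts to, the client-go sketch below polls a pod and checks its PodReady condition until it is True or a deadline passes; it is an assumption for illustration, not minikube's pod_ready.go, and the kubeconfig path and pod name are examples taken from the log.

// Minimal client-go sketch: wait for a pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPod(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-4821/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPod(cs, "kube-system", "etcd-embed-certs-781270", 6*time.Minute))
}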
	I0115 10:38:59.138232   47063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:59.173438   47063 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:59.173463   47063 kubeadm.go:640] restartCluster took 20.622435902s
	I0115 10:38:59.173473   47063 kubeadm.go:406] StartCluster complete in 20.676611158s
	I0115 10:38:59.173494   47063 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.173598   47063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:59.176160   47063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.176389   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:59.176558   47063 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:59.176645   47063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176652   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:59.176680   47063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.176696   47063 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:59.176706   47063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176725   47063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-709012"
	I0115 10:38:59.176768   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177130   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177163   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177188   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177220   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177254   47063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.177288   47063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.177305   47063 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:59.177390   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177796   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177849   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.182815   47063 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-709012" context rescaled to 1 replicas
	I0115 10:38:59.182849   47063 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:59.184762   47063 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:59.186249   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
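kapi.go rescales the coredns deployment to a single replica before the component checks start. Roughly the same operation through client-go's scale subresource (a sketch, not minikube's own code; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the current scale of the coredns deployment...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and write it back with one replica, as the kapi.go line above reports.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}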
	I0115 10:38:59.196870   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0115 10:38:59.197111   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0115 10:38:59.197493   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.197595   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.198074   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198096   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198236   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198264   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198410   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.198620   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.198634   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.199252   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.199278   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.202438   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0115 10:38:59.202957   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.203462   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.203489   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.203829   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.204271   47063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.204295   47063 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:59.204322   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.204406   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204434   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.204728   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204768   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.220973   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0115 10:38:59.221383   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.221873   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.221898   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.222330   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.222537   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.223337   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0115 10:38:59.223746   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0115 10:38:59.224454   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.224557   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.227071   47063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:59.225090   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.225234   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.228609   47063 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.228624   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:59.228638   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.228668   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229046   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.229064   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229415   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229515   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229671   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.230070   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.230093   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.232470   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.233532   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.235985   47063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:56.206357   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.524032218s)
	I0115 10:38:56.206399   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0115 10:38:56.206444   46388 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: (2.52429359s)
	I0115 10:38:56.206494   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206580   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.524566038s)
	I0115 10:38:56.206594   46388 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206609   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206684   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.52488513s)
	I0115 10:38:56.206806   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0115 10:38:56.206718   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.524535788s)
	I0115 10:38:56.206824   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0115 10:38:56.206756   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.524930105s)
	I0115 10:38:56.206843   46388 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.206863   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206780   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.478563083s)
	I0115 10:38:56.206890   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206908   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.986404   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0115 10:38:56.986480   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0115 10:38:56.986513   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:56.986555   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:59.063376   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.076785591s)
	I0115 10:38:59.063421   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0115 10:38:59.063449   46388 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.063494   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
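The crio.go/podman lines from process 46388 load each cached image tarball under /var/lib/minikube/images into CRI-O's image store, one at a time. A thin exec wrapper showing that single step (the tarball path is one of the files named above):

package main

import (
	"fmt"
	"os/exec"
)

// loadImage shells out to podman the same way the "sudo podman load -i ..." lines above do.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// One of the cached tarballs transferred earlier in the log.
	if err := loadImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		panic(err)
	}
	fmt.Println("image loaded into CRI-O storage")
}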
	I0115 10:38:59.234530   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.234543   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.237273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.237334   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:59.237349   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:59.237367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.237458   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.237624   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.237776   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.240471   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242356   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.242442   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.242483   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242538   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.245246   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.245394   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.251844   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0115 10:38:59.252344   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.252855   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.252876   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.253245   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.253439   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.255055   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.255299   47063 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.255315   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:59.255331   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.258732   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259370   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.259408   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259554   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.259739   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.259915   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.260060   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
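The "scp memory --> ..." entries transfer the addon manifests by streaming bytes over the SSH connections opened just above (IP 192.168.39.125, user docker, the profile's id_rsa key). A hedged sketch with golang.org/x/crypto/ssh; piping into sudo tee is an illustrative stand-in, not minikube's actual transfer code:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote pipes data into "sudo tee <path>" on the remote host, a simple
// stand-in for the scp-style transfer logged above.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + path + " > /dev/null")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.125:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("# storage-provisioner manifest would go here\n")
	if err := writeRemote(client, "/etc/kubernetes/addons/storage-provisioner.yaml", manifest); err != nil {
		panic(err)
	}
	fmt.Println("manifest written")
}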
	I0115 10:38:59.380593   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:59.380623   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:59.387602   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.409765   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.434624   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:59.434655   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:59.514744   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:59.514778   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:59.528401   47063 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:59.528428   47063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:38:59.552331   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:00.775084   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.365286728s)
	I0115 10:39:00.775119   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387483878s)
	I0115 10:39:00.775251   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775268   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775195   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775319   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775697   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775737   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775778   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.775791   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.775805   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775818   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.776009   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.776030   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778922   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.778939   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778949   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.778959   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.779172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.780377   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.780395   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.787873   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.787893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.788142   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.788161   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.882725   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330338587s)
	I0115 10:39:00.882775   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.882792   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883118   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883140   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883144   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.883150   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.883166   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883494   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883513   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883523   47063 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-709012"
	I0115 10:39:00.887782   47063 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
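Once the manifests are on the node, each addon is applied with the guest's own kubectl binary, exactly the invocation logged at 10:38:59.552331 above. A local exec sketch of that command; using the --kubeconfig flag instead of the KUBECONFIG environment variable and running without sudo are simplifications made here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the metrics-server apply from the log, file for file.
	args := []string{
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl apply failed: %v\n%s", err, out))
	}
	fmt.Printf("%s", out)
}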
	I0115 10:38:56.767524   46584 pod_ready.go:92] pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.767555   46584 pod_ready.go:81] duration metric: took 399.766724ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.767569   46584 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.776515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:00.777313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:03.358192   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.294671295s)
	I0115 10:39:03.358221   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0115 10:39:03.358249   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:03.358296   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:00.889422   47063 addons.go:505] enable addons completed in 1.71286662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:01.533332   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.534081   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.274613   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.277132   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.981700   46387 kubeadm.go:787] kubelet initialised
	I0115 10:39:05.981726   46387 kubeadm.go:788] duration metric: took 49.462651853s waiting for restarted kubelet to initialise ...
	I0115 10:39:05.981737   46387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:05.987142   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993872   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.993896   46387 pod_ready.go:81] duration metric: took 6.725677ms waiting for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993920   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999103   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.999133   46387 pod_ready.go:81] duration metric: took 5.204706ms waiting for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999148   46387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004449   46387 pod_ready.go:92] pod "etcd-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.004472   46387 pod_ready.go:81] duration metric: took 5.315188ms waiting for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004484   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010187   46387 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.010209   46387 pod_ready.go:81] duration metric: took 5.716918ms waiting for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010221   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380715   46387 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.380742   46387 pod_ready.go:81] duration metric: took 370.513306ms waiting for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380756   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780865   46387 pod_ready.go:92] pod "kube-proxy-w9fdn" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.780887   46387 pod_ready.go:81] duration metric: took 400.122851ms waiting for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780899   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179755   46387 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.179785   46387 pod_ready.go:81] duration metric: took 398.879027ms waiting for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179798   46387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.188315   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.429866   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.071542398s)
	I0115 10:39:05.429896   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0115 10:39:05.429927   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:05.429988   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:08.115120   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.685106851s)
	I0115 10:39:08.115147   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0115 10:39:08.115179   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:08.115226   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:05.540836   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:07.032884   47063 node_ready.go:49] node "default-k8s-diff-port-709012" has status "Ready":"True"
	I0115 10:39:07.032914   47063 node_ready.go:38] duration metric: took 7.504464113s waiting for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:39:07.032928   47063 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:07.042672   47063 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048131   47063 pod_ready.go:92] pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.048156   47063 pod_ready.go:81] duration metric: took 5.456337ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048167   47063 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053470   47063 pod_ready.go:92] pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.053492   47063 pod_ready.go:81] duration metric: took 5.316882ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053504   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.061828   47063 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.562201   47063 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.562235   47063 pod_ready.go:81] duration metric: took 2.508719163s waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.562248   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571588   47063 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.571614   47063 pod_ready.go:81] duration metric: took 9.356396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571628   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580269   47063 pod_ready.go:92] pod "kube-proxy-d8lcq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.580291   47063 pod_ready.go:81] duration metric: took 8.654269ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580305   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833621   47063 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.833646   47063 pod_ready.go:81] duration metric: took 253.332081ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833658   47063 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.776707   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.777515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.687740   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.187565   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.092236   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.976986955s)
	I0115 10:39:11.092266   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0115 10:39:11.092290   46388 cache_images.go:123] Successfully loaded all cached images
	I0115 10:39:11.092296   46388 cache_images.go:92] LoadImages completed in 18.018443053s
	I0115 10:39:11.092373   46388 ssh_runner.go:195] Run: crio config
	I0115 10:39:11.155014   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:11.155036   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:11.155056   46388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:39:11.155074   46388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-824502 NodeName:no-preload-824502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:39:11.155203   46388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-824502"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:39:11.155265   46388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-824502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
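The kubeadm.go:976 block above is the kubelet flag drop-in that is written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes). A text/template sketch of rendering it from per-node values; the struct and template literal are illustrative, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// kubeletParams holds the per-node values substituted into the drop-in.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := kubeletParams{
		KubernetesVersion: "v1.29.0-rc.2",
		NodeName:          "no-preload-824502",
		NodeIP:            "192.168.50.136",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	// Render to stdout; on the node this content is written to the kubelet drop-in path.
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}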
	I0115 10:39:11.155316   46388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:39:11.165512   46388 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:39:11.165586   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:39:11.175288   46388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0115 10:39:11.192730   46388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:39:11.209483   46388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0115 10:39:11.228296   46388 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0115 10:39:11.232471   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
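The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP, dropping any stale entry first. The same edit in plain Go; it needs root to actually write /etc/hosts and assumes the old entry is tab-separated, as in the one-liner:

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.136\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Drop any existing control-plane.minikube.internal line, keep the rest.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh entry, mirroring the grep -v / echo pipeline above.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"

	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		panic(err)
	}
}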
	I0115 10:39:11.245041   46388 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502 for IP: 192.168.50.136
	I0115 10:39:11.245106   46388 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:11.245298   46388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:39:11.245364   46388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:39:11.245456   46388 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.key
	I0115 10:39:11.245551   46388 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key.cb5546de
	I0115 10:39:11.245617   46388 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key
	I0115 10:39:11.245769   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:39:11.245808   46388 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:39:11.245823   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:39:11.245855   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:39:11.245895   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:39:11.245937   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:39:11.246018   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:39:11.246987   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:39:11.272058   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:39:11.295425   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:39:11.320271   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:39:11.347161   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:39:11.372529   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:39:11.396765   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:39:11.419507   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:39:11.441814   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:39:11.463306   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:39:11.485830   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:39:11.510306   46388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:39:11.527095   46388 ssh_runner.go:195] Run: openssl version
	I0115 10:39:11.532483   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:39:11.543447   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548266   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548330   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.554228   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:39:11.564891   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:39:11.574809   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579217   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579257   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.584745   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:39:11.596117   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:39:11.606888   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611567   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611632   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.617307   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:39:11.627893   46388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:39:11.632530   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:39:11.638562   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:39:11.644605   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:39:11.650917   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:39:11.656970   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:39:11.662948   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
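The series of "openssl x509 ... -checkend 86400" runs asks, for each control-plane certificate, whether it stays valid for at least another 24 hours. The equivalent test with Go's crypto/x509 (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the Go counterpart of "openssl x509 -noout -in <path> -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}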
	I0115 10:39:11.669010   46388 kubeadm.go:404] StartCluster: {Name:no-preload-824502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:39:11.669093   46388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:39:11.669144   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:11.707521   46388 cri.go:89] found id: ""
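cri.go above lists any pre-existing kube-system containers via a label-filtered crictl query; here it comes back empty, and the existing configuration files on disk are used for a cluster restart instead. A thin wrapper around the same query (invoked with plain sudo rather than the sudo -s eval form in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query as cri.go above and
// returns the container IDs, one per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}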
	I0115 10:39:11.707594   46388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:39:11.719407   46388 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:39:11.719445   46388 kubeadm.go:636] restartCluster start
	I0115 10:39:11.719511   46388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:39:11.729609   46388 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.730839   46388 kubeconfig.go:92] found "no-preload-824502" server: "https://192.168.50.136:8443"
	I0115 10:39:11.733782   46388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:39:11.744363   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:11.744437   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:11.757697   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.245289   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.245389   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.258680   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.745234   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.745334   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.757934   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.244459   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.244549   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.256860   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.745400   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.745486   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.759185   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:14.244696   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.244774   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.257692   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.842044   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.339850   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.779637   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.278260   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.187668   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.187834   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.745104   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.745191   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.757723   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.244680   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.244760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.259042   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.744599   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.744692   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.761497   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.245412   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.245507   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.260040   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.744664   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.744752   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.757209   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.244739   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.244826   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.257922   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.744411   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.744528   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.756304   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.244475   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.244580   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.257372   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.744977   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.745072   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.758201   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:19.244832   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.244906   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.257468   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.342438   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.845282   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.776399   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.276057   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:20.686392   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:22.687613   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.745014   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.745076   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.757274   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.245246   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.245307   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.257735   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.745333   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.745422   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.757945   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.245022   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.245112   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.257351   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.744980   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.745057   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.756073   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.756099   46388 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:39:21.756107   46388 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:39:21.756116   46388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:39:21.756167   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:21.800172   46388 cri.go:89] found id: ""
	I0115 10:39:21.800229   46388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:39:21.815607   46388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:39:21.826460   46388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:39:21.826525   46388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835735   46388 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835758   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:21.963603   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.673572   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.882139   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.975846   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
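Rather than a full `kubeadm init`, the reconfigure above replays individual init phases against the freshly copied config. A rough sketch of that sequencing, assuming `kubeadm` is on PATH and the rendered config already sits at /var/tmp/minikube/kubeadm.yaml (the PATH override and sudo wrapper from the log are omitted):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhases replays the phase order seen in the log: certs, kubeconfig,
// kubelet-start, control-plane, then local etcd, all against one config file.
func runPhases(config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```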
	I0115 10:39:23.061284   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:39:23.061391   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:23.561760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.061736   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.562127   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:21.340520   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.340897   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:21.776123   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.776196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.777003   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:24.688163   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.187371   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.061818   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.561582   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.584837   46388 api_server.go:72] duration metric: took 2.523550669s to wait for apiserver process to appear ...
	I0115 10:39:25.584868   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:39:25.584893   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.585385   46388 api_server.go:269] stopped: https://192.168.50.136:8443/healthz: Get "https://192.168.50.136:8443/healthz": dial tcp 192.168.50.136:8443: connect: connection refused
	I0115 10:39:26.085248   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.546970   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.547007   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.547026   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.597433   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.597466   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.597482   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.342652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.343320   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.840652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.625537   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:29.625587   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.085614   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.091715   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.091745   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.585298   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.591889   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.591919   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:31.086028   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:31.091297   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:39:31.099702   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:39:31.099726   46388 api_server.go:131] duration metric: took 5.514851771s to wait for apiserver health ...
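The wait above walks /healthz from "connection refused", to 403 for the anonymous user, to 500 while poststarthooks finish, and finally to 200 "ok". A minimal sketch of that kind of poll; certificate verification is skipped purely to keep the example short, whereas a real client would present the cluster CA and a client certificate:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, printing the body on non-200 answers like the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.136:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```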
	I0115 10:39:31.099735   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:31.099741   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:31.102193   46388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:39:28.275539   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:30.276634   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.104002   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:39:31.130562   46388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
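The bridge CNI step writes a conflist into /etc/cni/net.d. The exact 457-byte payload minikube generates is not shown in the log, so the sketch below only writes a generic bridge-plus-portmap conflist of the same general shape:

```go
package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI conflist, written where the log places minikube's own
// file. Contents are illustrative; the real payload is not reproduced above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```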
	I0115 10:39:31.163222   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:39:31.186170   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:39:31.186201   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:39:31.186212   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:39:31.186222   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:39:31.186231   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:39:31.186242   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:39:31.186252   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:39:31.186263   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:39:31.186276   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:39:31.186284   46388 system_pods.go:74] duration metric: took 23.040188ms to wait for pod list to return data ...
	I0115 10:39:31.186292   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:39:31.215529   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:39:31.215567   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:39:31.215584   46388 node_conditions.go:105] duration metric: took 29.283674ms to run NodePressure ...
	I0115 10:39:31.215615   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:31.584238   46388 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590655   46388 kubeadm.go:787] kubelet initialised
	I0115 10:39:31.590679   46388 kubeadm.go:788] duration metric: took 6.418412ms waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590688   46388 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:31.603892   46388 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.612449   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612484   46388 pod_ready.go:81] duration metric: took 8.567896ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.612497   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612507   46388 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.622651   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622678   46388 pod_ready.go:81] duration metric: took 10.161967ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.622690   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622698   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.633893   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633917   46388 pod_ready.go:81] duration metric: took 11.210807ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.633929   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633937   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.639395   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639423   46388 pod_ready.go:81] duration metric: took 5.474128ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.639434   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639442   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.989202   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989242   46388 pod_ready.go:81] duration metric: took 349.786667ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.989255   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989264   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.387200   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387227   46388 pod_ready.go:81] duration metric: took 397.955919ms waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.387236   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387243   46388 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.789213   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789235   46388 pod_ready.go:81] duration metric: took 401.985079ms waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.789245   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789252   46388 pod_ready.go:38] duration metric: took 1.198551697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:32.789271   46388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:39:32.802883   46388 ops.go:34] apiserver oom_adj: -16
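The oom_adj probe reads the apiserver's OOM score adjustment straight out of procfs; -16 tells the kernel to avoid killing it under memory pressure. A small sketch of the same read, using `pgrep -n` to pick the newest matching process:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log,
// narrowed to the newest matching PID so the path is unambiguous.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
```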
	I0115 10:39:32.802901   46388 kubeadm.go:640] restartCluster took 21.083448836s
	I0115 10:39:32.802908   46388 kubeadm.go:406] StartCluster complete in 21.133905255s
	I0115 10:39:32.802921   46388 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.802997   46388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:39:32.804628   46388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.804880   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:39:32.804990   46388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:39:32.805075   46388 addons.go:69] Setting storage-provisioner=true in profile "no-preload-824502"
	I0115 10:39:32.805094   46388 addons.go:234] Setting addon storage-provisioner=true in "no-preload-824502"
	W0115 10:39:32.805102   46388 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:39:32.805108   46388 addons.go:69] Setting default-storageclass=true in profile "no-preload-824502"
	I0115 10:39:32.805128   46388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-824502"
	I0115 10:39:32.805128   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:39:32.805137   46388 addons.go:69] Setting metrics-server=true in profile "no-preload-824502"
	I0115 10:39:32.805165   46388 addons.go:234] Setting addon metrics-server=true in "no-preload-824502"
	I0115 10:39:32.805172   46388 host.go:66] Checking if "no-preload-824502" exists ...
	W0115 10:39:32.805175   46388 addons.go:243] addon metrics-server should already be in state true
	I0115 10:39:32.805219   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.805564   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805565   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805597   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805602   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805616   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805698   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.809596   46388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-824502" context rescaled to 1 replicas
	I0115 10:39:32.809632   46388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:39:32.812135   46388 out.go:177] * Verifying Kubernetes components...
	I0115 10:39:32.814392   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:39:32.823244   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0115 10:39:32.823758   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.823864   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0115 10:39:32.824287   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824306   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.824351   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.824693   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.824816   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.824833   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824857   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.825184   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.825778   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.825823   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.827847   46388 addons.go:234] Setting addon default-storageclass=true in "no-preload-824502"
	W0115 10:39:32.827864   46388 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:39:32.827883   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.828242   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.828286   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.838537   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0115 10:39:32.839162   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.839727   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.839747   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.841293   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.841862   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.841899   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.844309   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0115 10:39:32.844407   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0115 10:39:32.844654   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.844941   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.845132   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845156   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.845712   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.845881   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845894   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.846316   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.846347   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.846910   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.847189   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.849126   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.851699   46388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:39:32.853268   46388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:32.853284   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:39:32.853305   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.855997   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856372   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.856394   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856569   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.856716   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.856853   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.856975   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.861396   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0115 10:39:32.861893   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.862379   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.862409   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.862874   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.863050   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.864195   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0115 10:39:32.864480   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.866714   46388 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:39:32.864849   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.868242   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:39:32.868256   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:39:32.868274   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.868596   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.868613   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.869057   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.869306   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.870918   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.871163   46388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:32.871177   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:39:32.871192   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.871252   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871670   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.871691   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871958   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.872127   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.872288   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.872463   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.874381   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875287   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.875314   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875478   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.875624   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.875786   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.875893   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.982357   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:33.059016   46388 node_ready.go:35] waiting up to 6m0s for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:33.059259   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:39:33.059281   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:39:33.060796   46388 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:39:33.060983   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:33.110608   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:39:33.110633   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:39:33.154857   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:33.154886   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:39:33.198495   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
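Addon installation above follows one pattern: render each manifest to /etc/kubernetes/addons on the node, then apply the batch with the pinned kubectl and the in-VM kubeconfig. A condensed sketch of the apply step (the scp half is elided):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests applies a set of addon manifests with an explicit kubectl
// binary and kubeconfig, matching the command shape in the log.
func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```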
	I0115 10:39:34.178167   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117123302s)
	I0115 10:39:34.178220   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178234   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178312   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19592253s)
	I0115 10:39:34.178359   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178372   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178649   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178669   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178687   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178712   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178723   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178735   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178691   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178800   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178811   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178823   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178982   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179001   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.179003   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179040   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179057   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179075   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.186855   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.186875   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.187114   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.187135   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.187154   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.293778   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095231157s)
	I0115 10:39:34.293837   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.293861   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294161   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294184   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294194   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.294203   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294451   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294475   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294487   46388 addons.go:470] Verifying addon metrics-server=true in "no-preload-824502"
	I0115 10:39:34.296653   46388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:39:29.687541   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.689881   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.692248   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.298179   46388 addons.go:505] enable addons completed in 1.493195038s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:31.842086   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.843601   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:32.775651   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.778997   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:36.186700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.688932   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:35.063999   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:37.068802   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:39.564287   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:36.341901   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.344615   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:37.278252   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:39.780035   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:41.186854   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.687410   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:40.063481   46388 node_ready.go:49] node "no-preload-824502" has status "Ready":"True"
	I0115 10:39:40.063509   46388 node_ready.go:38] duration metric: took 7.00445832s waiting for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:40.063521   46388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:40.069733   46388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077511   46388 pod_ready.go:92] pod "coredns-76f75df574-ft2wt" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.077539   46388 pod_ready.go:81] duration metric: took 7.783253ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077549   46388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082665   46388 pod_ready.go:92] pod "etcd-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.082693   46388 pod_ready.go:81] duration metric: took 5.137636ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082704   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087534   46388 pod_ready.go:92] pod "kube-apiserver-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.087552   46388 pod_ready.go:81] duration metric: took 4.840583ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087563   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092447   46388 pod_ready.go:92] pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.092473   46388 pod_ready.go:81] duration metric: took 4.90114ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092493   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464047   46388 pod_ready.go:92] pod "kube-proxy-nlk2h" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.464065   46388 pod_ready.go:81] duration metric: took 371.565815ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464075   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:42.472255   46388 pod_ready.go:102] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.471011   46388 pod_ready.go:92] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:43.471033   46388 pod_ready.go:81] duration metric: took 3.006951578s waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:43.471045   46388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.841668   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.842151   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.277636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:44.787510   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:46.187891   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:48.687578   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.478255   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.978120   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.340455   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.341486   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.840829   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.275430   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.776946   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.188236   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.686748   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.980682   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:52.479488   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.840971   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.841513   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.778023   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.275602   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:55.687892   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.186665   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.978059   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.978213   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.978881   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.341772   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.841021   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.775700   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:59.274671   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:01.280895   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.186976   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:02.688712   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.978942   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.482480   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.841912   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.340823   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.775015   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.776664   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.185744   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.185877   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:09.187192   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.979141   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.479235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.840997   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.842100   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.278110   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.775278   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:11.686672   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.187037   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.978475   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.978621   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.346343   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.841357   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.841981   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:13.278313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:15.777340   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.188343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.687840   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.979177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.981550   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.478364   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:17.340973   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.341317   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.275525   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:20.277493   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.187342   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.693743   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.480386   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.481947   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.341650   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.841949   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:22.777674   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.273859   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:26.186846   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.188206   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.978266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.979824   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.842629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.341954   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.274109   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:29.275517   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:31.277396   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.688520   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.187343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.478712   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:32.978549   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.843559   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.340435   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.278639   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.777051   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.186611   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:34.978720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:37.488790   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.841994   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.340074   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.278319   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.776206   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:39.978911   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.478331   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.187741   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.687320   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.340766   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.341909   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.843116   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.777726   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.777953   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:45.188685   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.687270   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.978841   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.477932   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.478482   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.340237   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.341936   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.275247   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.777753   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.688548   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.187385   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.188261   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.478562   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.978677   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.840537   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.842188   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.278594   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.774847   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.687614   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:59.186203   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.479325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.979266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.340295   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.342857   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.776968   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.777421   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.278730   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.186645   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.187583   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.478127   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.478816   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:00.841474   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.340255   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.775648   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.779261   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.687557   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.688081   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.979671   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.478240   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.345230   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.841561   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:09.841629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.275641   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.276466   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.187771   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.688852   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.478832   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.978808   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:11.841717   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.341355   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.775133   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.274677   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.186001   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.186387   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.186931   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.979099   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.478539   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:16.841294   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:18.842244   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.776623   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:20.274196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.187095   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.689700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.978471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.478169   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.479319   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.341851   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.343663   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.275134   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.276420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.185307   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.186549   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.978977   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.979239   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:25.840539   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:27.840819   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:29.842580   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.775069   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.775244   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.275239   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:30.187482   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.687454   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.478330   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.479265   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.340974   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.342201   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.275561   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.775652   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.687487   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.689628   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:39.186244   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.979235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.981609   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.342452   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:38.841213   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.775893   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.274573   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.186313   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.687042   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.478993   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.479953   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.341359   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.840325   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.775636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.275821   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.687911   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.186598   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:44.977946   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:46.980471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.477591   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.841849   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.341443   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:47.276441   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.775182   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.687273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.187451   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.480325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.979440   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.841657   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.341257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.776199   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:54.274920   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.188121   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.191970   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.478903   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:58.979288   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.341479   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.841144   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.841215   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.775625   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.276127   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.687860   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:02.188506   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.480582   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:03.977715   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.841608   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.340546   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.775220   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.274050   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.277327   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.688269   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.187187   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:05.977760   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.978356   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.340629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.341333   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.775130   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.776410   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.686836   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.187035   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.187814   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.978478   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.477854   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.477883   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.341625   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.841300   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.842745   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:13.276029   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:15.774949   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.686998   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.689531   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.478177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.978154   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.844053   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:19.339915   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:17.775988   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:20.276213   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.187144   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.188273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.479275   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.977720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.342019   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.343747   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:22.775222   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.274922   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.186701   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.979093   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.478022   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.843596   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.340257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:27.275420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:29.275918   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:31.276702   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.186796   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.686406   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.478933   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.978757   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.341780   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.842117   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:33.774432   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.775822   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:34.687304   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:36.687850   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.187956   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.478261   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.978198   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.341314   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.840626   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.842475   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:38.275042   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:40.774892   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.686479   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.688800   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.980119   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:42.478070   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.478709   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.844661   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.340617   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.278574   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:45.775324   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.185760   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.186399   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.479381   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.979086   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.842369   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:49.341153   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:47.776338   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.275329   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.187219   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.687370   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.479573   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.978568   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.840818   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.842279   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.776812   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:54.780747   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.187111   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:57.187263   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.478479   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.977687   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.846775   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.340913   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.768584   46584 pod_ready.go:81] duration metric: took 4m0.001000825s waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:42:56.768615   46584 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:42:56.768623   46584 pod_ready.go:38] duration metric: took 4m9.613401399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:42:56.768641   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:42:56.768686   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:42:56.768739   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:42:56.842276   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:56.842298   46584 cri.go:89] found id: ""
	I0115 10:42:56.842309   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:42:56.842361   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.847060   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:42:56.847118   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:42:56.887059   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:56.887092   46584 cri.go:89] found id: ""
	I0115 10:42:56.887100   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:42:56.887158   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.893238   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:42:56.893289   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:42:56.933564   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:56.933593   46584 cri.go:89] found id: ""
	I0115 10:42:56.933603   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:42:56.933657   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.937882   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:42:56.937958   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:42:56.980953   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:56.980989   46584 cri.go:89] found id: ""
	I0115 10:42:56.980999   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:42:56.981051   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.985008   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:42:56.985058   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:42:57.026275   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:57.026305   46584 cri.go:89] found id: ""
	I0115 10:42:57.026315   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:42:57.026373   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.030799   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:42:57.030885   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:42:57.071391   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:42:57.071416   46584 cri.go:89] found id: ""
	I0115 10:42:57.071424   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:42:57.071485   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.076203   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:42:57.076254   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:42:57.119035   46584 cri.go:89] found id: ""
	I0115 10:42:57.119062   46584 logs.go:284] 0 containers: []
	W0115 10:42:57.119069   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:42:57.119074   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:42:57.119129   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:42:57.167335   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:57.167355   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:57.167360   46584 cri.go:89] found id: ""
	I0115 10:42:57.167367   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:42:57.167411   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.171919   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.176255   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:42:57.176284   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:42:57.328501   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:42:57.328538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:57.390279   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:42:57.390309   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:57.886607   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:42:57.886645   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:42:57.937391   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:42:57.937420   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:42:58.001313   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:42:58.001348   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:42:58.016772   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:42:58.016804   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:58.060489   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:42:58.060516   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:58.102993   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:42:58.103043   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:58.140732   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:42:58.140764   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:58.191891   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:42:58.191927   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:58.235836   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:42:58.235861   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:58.277424   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:42:58.277465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:00.844771   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:00.862922   46584 api_server.go:72] duration metric: took 4m17.850865s to wait for apiserver process to appear ...
	I0115 10:43:00.862946   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:00.862992   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:00.863055   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:00.909986   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:00.910013   46584 cri.go:89] found id: ""
	I0115 10:43:00.910020   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:00.910066   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.915553   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:00.915634   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:00.969923   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:00.969951   46584 cri.go:89] found id: ""
	I0115 10:43:00.969961   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:00.970021   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.974739   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:00.974805   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:01.024283   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.024305   46584 cri.go:89] found id: ""
	I0115 10:43:01.024314   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:01.024366   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.029325   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:01.029388   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:01.070719   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.070746   46584 cri.go:89] found id: ""
	I0115 10:43:01.070755   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:01.070806   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.074906   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:01.074969   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:01.111715   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.111747   46584 cri.go:89] found id: ""
	I0115 10:43:01.111756   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:01.111805   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.116173   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:01.116225   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:01.157760   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.157791   46584 cri.go:89] found id: ""
	I0115 10:43:01.157802   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:01.157866   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.161944   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:01.162010   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:01.201888   46584 cri.go:89] found id: ""
	I0115 10:43:01.201915   46584 logs.go:284] 0 containers: []
	W0115 10:43:01.201925   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:01.201932   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:01.201990   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:01.244319   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.244346   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.244352   46584 cri.go:89] found id: ""
	I0115 10:43:01.244361   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:01.244454   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.248831   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.253617   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:01.253643   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:01.309426   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:01.309465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.346755   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:01.346789   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.385238   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:01.385266   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.423907   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:01.423941   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.480867   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:01.480902   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:01.538367   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:01.538403   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.580240   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:01.580273   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.622561   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:01.622602   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:01.675436   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:01.675463   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:59.687714   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.186463   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.982902   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:03.478178   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.840619   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.841154   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:04.842905   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.080545   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:02.080578   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:02.144713   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:02.144756   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:02.160120   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:02.160147   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:04.776113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:43:04.782741   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:43:04.783959   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:04.783979   46584 api_server.go:131] duration metric: took 3.92102734s to wait for apiserver health ...
	I0115 10:43:04.783986   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:04.784019   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:04.784071   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:04.832660   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:04.832685   46584 cri.go:89] found id: ""
	I0115 10:43:04.832695   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:04.832750   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.836959   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:04.837009   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:04.878083   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:04.878103   46584 cri.go:89] found id: ""
	I0115 10:43:04.878110   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:04.878160   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.882581   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:04.882642   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:04.927778   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:04.927798   46584 cri.go:89] found id: ""
	I0115 10:43:04.927805   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:04.927848   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.932822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:04.932891   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:04.975930   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:04.975955   46584 cri.go:89] found id: ""
	I0115 10:43:04.975965   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:04.976010   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.980744   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:04.980803   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:05.024300   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.024325   46584 cri.go:89] found id: ""
	I0115 10:43:05.024332   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:05.024383   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.029091   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:05.029159   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:05.081239   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.081264   46584 cri.go:89] found id: ""
	I0115 10:43:05.081273   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:05.081332   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.085822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:05.085879   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:05.126839   46584 cri.go:89] found id: ""
	I0115 10:43:05.126884   46584 logs.go:284] 0 containers: []
	W0115 10:43:05.126896   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:05.126903   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:05.126963   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:05.168241   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.168269   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.168276   46584 cri.go:89] found id: ""
	I0115 10:43:05.168285   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:05.168343   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.173309   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.177144   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:05.177164   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:05.239116   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:05.239148   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:05.368712   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:05.368745   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:05.429504   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:05.429540   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:05.473181   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:05.473216   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.510948   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:05.510974   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.551052   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:05.551082   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.606711   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:05.606746   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:05.661634   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:05.661663   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:05.675627   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:05.675656   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:05.736266   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:05.736305   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.775567   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:05.775597   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:06.111495   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:06.111531   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:08.661238   46584 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:08.661275   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.661282   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.661288   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.661294   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.661300   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.661306   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.661316   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.661324   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.661335   46584 system_pods.go:74] duration metric: took 3.877343546s to wait for pod list to return data ...
	I0115 10:43:08.661342   46584 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:08.664367   46584 default_sa.go:45] found service account: "default"
	I0115 10:43:08.664393   46584 default_sa.go:55] duration metric: took 3.04125ms for default service account to be created ...
	I0115 10:43:08.664408   46584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:08.672827   46584 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:08.672852   46584 system_pods.go:89] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.672860   46584 system_pods.go:89] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.672867   46584 system_pods.go:89] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.672873   46584 system_pods.go:89] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.672879   46584 system_pods.go:89] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.672885   46584 system_pods.go:89] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.672895   46584 system_pods.go:89] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.672906   46584 system_pods.go:89] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.672920   46584 system_pods.go:126] duration metric: took 8.505614ms to wait for k8s-apps to be running ...
	I0115 10:43:08.672933   46584 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:08.672984   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:08.690592   46584 system_svc.go:56] duration metric: took 17.651896ms WaitForService to wait for kubelet.
	I0115 10:43:08.690618   46584 kubeadm.go:581] duration metric: took 4m25.678563679s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:08.690640   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:08.694652   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:08.694679   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:08.694692   46584 node_conditions.go:105] duration metric: took 4.045505ms to run NodePressure ...
	I0115 10:43:08.694705   46584 start.go:228] waiting for startup goroutines ...
	I0115 10:43:08.694713   46584 start.go:233] waiting for cluster config update ...
	I0115 10:43:08.694725   46584 start.go:242] writing updated cluster config ...
	I0115 10:43:08.694991   46584 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:08.747501   46584 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:08.750319   46584 out.go:177] * Done! kubectl is now configured to use "embed-certs-781270" cluster and "default" namespace by default
	I0115 10:43:04.686284   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:06.703127   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.180590   46387 pod_ready.go:81] duration metric: took 4m0.000776944s waiting for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:07.180624   46387 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0115 10:43:07.180644   46387 pod_ready.go:38] duration metric: took 4m1.198895448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:07.180669   46387 kubeadm.go:640] restartCluster took 5m11.875261334s
	W0115 10:43:07.180729   46387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0115 10:43:07.180765   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0115 10:43:05.479764   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.978536   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.343529   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841510   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841533   47063 pod_ready.go:81] duration metric: took 4m0.007868879s waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:09.841542   47063 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:09.841549   47063 pod_ready.go:38] duration metric: took 4m2.808610487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:09.841562   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:09.841584   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:09.841625   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:12.165729   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.984931075s)
	I0115 10:43:12.165790   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:12.178710   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:43:12.188911   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:43:12.199329   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:43:12.199377   46387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 10:43:12.411245   46387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 10:43:09.980448   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:12.478625   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:14.479234   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.904898   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:09.904921   47063 cri.go:89] found id: ""
	I0115 10:43:09.904930   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:09.904996   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.911493   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:09.911557   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:09.958040   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:09.958060   47063 cri.go:89] found id: ""
	I0115 10:43:09.958070   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:09.958122   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.962914   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:09.962972   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:10.033848   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:10.033875   47063 cri.go:89] found id: ""
	I0115 10:43:10.033885   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:10.033946   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.043173   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:10.043232   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:10.088380   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:10.088405   47063 cri.go:89] found id: ""
	I0115 10:43:10.088415   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:10.088478   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.094288   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:10.094350   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:10.145428   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:10.145453   47063 cri.go:89] found id: ""
	I0115 10:43:10.145463   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:10.145547   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.150557   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:10.150637   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:10.206875   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:10.206901   47063 cri.go:89] found id: ""
	I0115 10:43:10.206915   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:10.206971   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.211979   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:10.212039   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:10.260892   47063 cri.go:89] found id: ""
	I0115 10:43:10.260914   47063 logs.go:284] 0 containers: []
	W0115 10:43:10.260924   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:10.260936   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:10.260987   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:10.315938   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.315970   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:10.315978   47063 cri.go:89] found id: ""
	I0115 10:43:10.315987   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:10.316045   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.324077   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.332727   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:10.332756   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.376006   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:10.376034   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:10.967301   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:10.967337   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:11.033301   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:11.033327   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:11.091151   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:11.091184   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:11.145411   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:11.145447   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:11.194249   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:11.194274   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:11.373988   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:11.374020   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:11.442754   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:11.442788   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:11.486282   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:11.486315   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:11.547428   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:11.547464   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:11.560977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:11.561005   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:11.603150   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:11.603179   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.149324   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:14.166360   47063 api_server.go:72] duration metric: took 4m14.983478755s to wait for apiserver process to appear ...
	I0115 10:43:14.166391   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:14.166444   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:14.166504   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:14.211924   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:14.211950   47063 cri.go:89] found id: ""
	I0115 10:43:14.211961   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:14.212018   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.216288   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:14.216352   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:14.264237   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:14.264270   47063 cri.go:89] found id: ""
	I0115 10:43:14.264280   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:14.264338   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.268883   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:14.268947   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:14.329606   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:14.329631   47063 cri.go:89] found id: ""
	I0115 10:43:14.329639   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:14.329694   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.334069   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:14.334133   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:14.374753   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.374779   47063 cri.go:89] found id: ""
	I0115 10:43:14.374788   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:14.374842   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.380452   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:14.380529   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:14.422341   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:14.422371   47063 cri.go:89] found id: ""
	I0115 10:43:14.422380   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:14.422444   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.427106   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:14.427169   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:14.469410   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:14.469440   47063 cri.go:89] found id: ""
	I0115 10:43:14.469450   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:14.469511   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.475098   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:14.475216   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:14.533771   47063 cri.go:89] found id: ""
	I0115 10:43:14.533794   47063 logs.go:284] 0 containers: []
	W0115 10:43:14.533800   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:14.533805   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:14.533876   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:14.573458   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:14.573483   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:14.573490   47063 cri.go:89] found id: ""
	I0115 10:43:14.573498   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:14.573561   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.578186   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.583133   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:14.583157   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.631142   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:14.631180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:16.978406   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:18.979879   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:15.076904   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:15.076958   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:15.129739   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:15.129778   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:15.169656   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:15.169685   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:15.229569   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:15.229616   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:15.293037   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:15.293075   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:15.351198   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:15.351243   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:15.394604   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:15.394642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:15.451142   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:15.451180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:15.466108   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:15.466146   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:15.595576   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:15.595615   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:15.643711   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:15.643740   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.200861   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:43:18.207576   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:43:18.208943   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:18.208964   47063 api_server.go:131] duration metric: took 4.042566476s to wait for apiserver health ...
	I0115 10:43:18.208971   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:18.208992   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:18.209037   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:18.254324   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.254353   47063 cri.go:89] found id: ""
	I0115 10:43:18.254361   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:18.254405   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.258765   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:18.258844   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:18.303785   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.303811   47063 cri.go:89] found id: ""
	I0115 10:43:18.303820   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:18.303880   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.308940   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:18.309009   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:18.358850   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:18.358878   47063 cri.go:89] found id: ""
	I0115 10:43:18.358888   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:18.358954   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.363588   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:18.363656   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:18.412797   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.412820   47063 cri.go:89] found id: ""
	I0115 10:43:18.412828   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:18.412878   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.418704   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:18.418765   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:18.460050   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:18.460074   47063 cri.go:89] found id: ""
	I0115 10:43:18.460083   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:18.460138   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.465581   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:18.465642   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:18.516632   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.516656   47063 cri.go:89] found id: ""
	I0115 10:43:18.516665   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:18.516719   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.521873   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:18.521935   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:18.574117   47063 cri.go:89] found id: ""
	I0115 10:43:18.574145   47063 logs.go:284] 0 containers: []
	W0115 10:43:18.574154   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:18.574161   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:18.574222   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:18.630561   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.630593   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:18.630599   47063 cri.go:89] found id: ""
	I0115 10:43:18.630606   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:18.630666   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.636059   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.640707   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:18.640728   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.681635   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:18.681667   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:18.803880   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:18.803913   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.864605   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:18.864642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.918210   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:18.918250   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.960702   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:18.960733   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:19.013206   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:19.013242   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:19.070193   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:19.070230   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:19.087983   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:19.088023   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:19.150096   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:19.150132   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:19.196977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:19.197006   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:19.244166   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:19.244202   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:19.290314   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:19.290349   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:22.182766   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:22.182794   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.182801   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.182808   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.182814   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.182820   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.182826   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.182836   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.182848   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.182858   47063 system_pods.go:74] duration metric: took 3.973880704s to wait for pod list to return data ...
	I0115 10:43:22.182869   47063 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:22.186304   47063 default_sa.go:45] found service account: "default"
	I0115 10:43:22.186344   47063 default_sa.go:55] duration metric: took 3.464907ms for default service account to be created ...
	I0115 10:43:22.186354   47063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:22.192564   47063 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:22.192595   47063 system_pods.go:89] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.192604   47063 system_pods.go:89] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.192611   47063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.192620   47063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.192627   47063 system_pods.go:89] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.192634   47063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.192644   47063 system_pods.go:89] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.192651   47063 system_pods.go:89] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.192661   47063 system_pods.go:126] duration metric: took 6.301001ms to wait for k8s-apps to be running ...
	I0115 10:43:22.192669   47063 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:22.192720   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:22.210150   47063 system_svc.go:56] duration metric: took 17.476738ms WaitForService to wait for kubelet.
	I0115 10:43:22.210169   47063 kubeadm.go:581] duration metric: took 4m23.02729406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:22.210190   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:22.214086   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:22.214111   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:22.214124   47063 node_conditions.go:105] duration metric: took 3.928309ms to run NodePressure ...
	I0115 10:43:22.214137   47063 start.go:228] waiting for startup goroutines ...
	I0115 10:43:22.214146   47063 start.go:233] waiting for cluster config update ...
	I0115 10:43:22.214158   47063 start.go:242] writing updated cluster config ...
	I0115 10:43:22.214394   47063 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:22.264250   47063 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:22.267546   47063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-709012" cluster and "default" namespace by default
	I0115 10:43:20.980266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:23.478672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.109313   46387 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0115 10:43:26.109392   46387 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:43:26.109501   46387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:43:26.109621   46387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:43:26.109750   46387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:43:26.109926   46387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:43:26.110051   46387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:43:26.110114   46387 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0115 10:43:26.110201   46387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:43:26.112841   46387 out.go:204]   - Generating certificates and keys ...
	I0115 10:43:26.112937   46387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:43:26.113031   46387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:43:26.113142   46387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:43:26.113237   46387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 10:43:26.113336   46387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:43:26.113414   46387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 10:43:26.113530   46387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 10:43:26.113617   46387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:43:26.113717   46387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:43:26.113814   46387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:43:26.113867   46387 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 10:43:26.113959   46387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:43:26.114029   46387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:43:26.114128   46387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:43:26.114214   46387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:43:26.114289   46387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:43:26.114400   46387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:43:26.115987   46387 out.go:204]   - Booting up control plane ...
	I0115 10:43:26.116100   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:43:26.116240   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:43:26.116349   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:43:26.116476   46387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:43:26.116677   46387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:43:26.116792   46387 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.004579 seconds
	I0115 10:43:26.116908   46387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:43:26.117097   46387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:43:26.117187   46387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:43:26.117349   46387 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-206509 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 10:43:26.117437   46387 kubeadm.go:322] [bootstrap-token] Using token: zc1jed.g57dxx99f2u8lwfg
	I0115 10:43:26.118960   46387 out.go:204]   - Configuring RBAC rules ...
	I0115 10:43:26.119074   46387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:43:26.119258   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:43:26.119401   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:43:26.119538   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:43:26.119657   46387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:43:26.119723   46387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:43:26.119796   46387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:43:26.119809   46387 kubeadm.go:322] 
	I0115 10:43:26.119857   46387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:43:26.119863   46387 kubeadm.go:322] 
	I0115 10:43:26.119923   46387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:43:26.119930   46387 kubeadm.go:322] 
	I0115 10:43:26.119950   46387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:43:26.120002   46387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:43:26.120059   46387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:43:26.120078   46387 kubeadm.go:322] 
	I0115 10:43:26.120120   46387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:43:26.120185   46387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:43:26.120249   46387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:43:26.120255   46387 kubeadm.go:322] 
	I0115 10:43:26.120359   46387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0115 10:43:26.120426   46387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:43:26.120433   46387 kubeadm.go:322] 
	I0115 10:43:26.120512   46387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120660   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 10:43:26.120687   46387 kubeadm.go:322]     --control-plane 	  
	I0115 10:43:26.120691   46387 kubeadm.go:322] 
	I0115 10:43:26.120757   46387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:43:26.120763   46387 kubeadm.go:322] 
	I0115 10:43:26.120831   46387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120969   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
	I0115 10:43:26.120990   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:43:26.121000   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:43:26.122557   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:43:25.977703   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:27.979775   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.123754   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:43:26.133514   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:43:26.152666   46387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:43:26.152776   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.152794   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=old-k8s-version-206509 minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.205859   46387 ops.go:34] apiserver oom_adj: -16
	I0115 10:43:26.398371   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.899064   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.398532   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.898380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.398986   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.899140   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.399224   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.898397   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.399321   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.899035   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.398549   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.898547   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.399096   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.898492   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.399077   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.899311   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:34.398839   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.980789   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:31.981727   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.479518   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.398611   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.898531   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.399422   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.898569   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.399432   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.399017   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.898561   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:39.398551   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.977916   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:38.978672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:39.899402   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.398556   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.898384   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:41.035213   46387 kubeadm.go:1088] duration metric: took 14.882479947s to wait for elevateKubeSystemPrivileges.
	I0115 10:43:41.035251   46387 kubeadm.go:406] StartCluster complete in 5m45.791159963s
	I0115 10:43:41.035271   46387 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.035357   46387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:43:41.037947   46387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.038220   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:43:41.038242   46387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:43:41.038314   46387 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038317   46387 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038333   46387 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-206509"
	I0115 10:43:41.038334   46387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-206509"
	W0115 10:43:41.038341   46387 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:43:41.038389   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038388   46387 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038405   46387 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-206509"
	W0115 10:43:41.038428   46387 addons.go:243] addon metrics-server should already be in state true
	I0115 10:43:41.038446   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:43:41.038467   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038724   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038738   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038783   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038787   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038815   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038909   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.054942   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0115 10:43:41.055314   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.055844   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.055868   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.056312   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.056464   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0115 10:43:41.056853   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.056878   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.056910   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.057198   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0115 10:43:41.057317   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057341   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.057532   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.057682   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.057844   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.057955   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057979   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.058300   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.058921   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.058952   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.061947   46387 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-206509"
	W0115 10:43:41.061973   46387 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:43:41.061999   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.062381   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.062405   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.075135   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0115 10:43:41.075593   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.075704   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0115 10:43:41.076514   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.076536   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.076723   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.077196   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.077219   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.077225   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077564   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077607   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.077723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.080161   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.080238   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.082210   46387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:43:41.083883   46387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:43:41.085452   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:43:41.085477   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:43:41.083855   46387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.085496   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.085496   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:43:41.085511   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.086304   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0115 10:43:41.086675   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.087100   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.087120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.087465   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.087970   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.088011   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.090492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.091743   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092335   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092355   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092675   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092695   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092833   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.092969   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.093129   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.093233   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.094042   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.094209   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.094296   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.094372   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.105226   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0115 10:43:41.105644   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.106092   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.106120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.106545   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.106759   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.108735   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.109022   46387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.109040   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:43:41.109057   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.112322   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112771   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.112797   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112914   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.113100   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.113279   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.113442   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.353016   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:43:41.353038   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:43:41.357846   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.365469   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.465358   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:43:41.465379   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:43:41.532584   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:41.532612   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:43:41.598528   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 10:43:41.605798   46387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-206509" context rescaled to 1 replicas
	I0115 10:43:41.605838   46387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:43:41.607901   46387 out.go:177] * Verifying Kubernetes components...
	I0115 10:43:41.609363   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:41.608778   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:42.634034   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268517129s)
	I0115 10:43:42.634071   46387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.024689682s)
	I0115 10:43:42.634090   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634095   46387 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.634103   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634046   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035489058s)
	I0115 10:43:42.634140   46387 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0115 10:43:42.634200   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.276326924s)
	I0115 10:43:42.634228   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634243   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634451   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634495   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634515   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634525   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634534   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634540   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634557   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634570   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634580   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634589   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634896   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634912   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634967   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634997   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.635008   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.656600   46387 node_ready.go:49] node "old-k8s-version-206509" has status "Ready":"True"
	I0115 10:43:42.656629   46387 node_ready.go:38] duration metric: took 22.522223ms waiting for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.656640   46387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:42.714802   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.714834   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.715273   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.715277   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.715303   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.722261   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:42.792908   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183451396s)
	I0115 10:43:42.792964   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.792982   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793316   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793339   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793352   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.793361   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793580   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793625   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793638   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793649   46387 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-206509"
	I0115 10:43:42.796113   46387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:43:42.798128   46387 addons.go:505] enable addons completed in 1.759885904s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:43:40.979360   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477862   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477895   46388 pod_ready.go:81] duration metric: took 4m0.006840717s waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:43.477906   46388 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:43.477915   46388 pod_ready.go:38] duration metric: took 4m3.414382685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:43.477933   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:43.477963   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:43.478033   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:43.533796   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:43.533825   46388 cri.go:89] found id: ""
	I0115 10:43:43.533836   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:43.533893   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.540165   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:43.540224   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:43.576831   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:43.576853   46388 cri.go:89] found id: ""
	I0115 10:43:43.576861   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:43.576922   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.581556   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:43.581616   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:43.625292   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.625315   46388 cri.go:89] found id: ""
	I0115 10:43:43.625323   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:43.625371   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.630741   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:43.630803   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:43.682511   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:43.682553   46388 cri.go:89] found id: ""
	I0115 10:43:43.682563   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:43.682621   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.688126   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:43.688194   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:43.739847   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.739866   46388 cri.go:89] found id: ""
	I0115 10:43:43.739873   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:43.739919   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.744569   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:43.744635   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:43.787603   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:43.787627   46388 cri.go:89] found id: ""
	I0115 10:43:43.787635   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:43.787676   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.792209   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:43.792271   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:43.838530   46388 cri.go:89] found id: ""
	I0115 10:43:43.838557   46388 logs.go:284] 0 containers: []
	W0115 10:43:43.838568   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:43.838576   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:43.838636   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:43.885727   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:43.885755   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:43.885761   46388 cri.go:89] found id: ""
	I0115 10:43:43.885769   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:43.885822   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.891036   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.895462   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:43.895493   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.939544   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:43.939568   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.985944   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:43.985973   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:44.052893   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:44.052923   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:44.116539   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:44.116569   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:44.173390   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:44.173432   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:44.194269   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:44.194295   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:44.239908   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:44.239935   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:44.729495   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:46.231080   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:46.231100   46387 pod_ready.go:81] duration metric: took 3.50881186s waiting for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:46.231109   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:48.239378   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:44.737413   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:44.737445   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:44.891846   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:44.891875   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:44.951418   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:44.951453   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:45.000171   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:45.000201   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:45.041629   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:45.041657   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.586439   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:47.602078   46388 api_server.go:72] duration metric: took 4m14.792413378s to wait for apiserver process to appear ...
	I0115 10:43:47.602102   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:47.602138   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:47.602193   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:47.646259   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:47.646283   46388 cri.go:89] found id: ""
	I0115 10:43:47.646291   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:47.646346   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.650757   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:47.650830   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:47.691688   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.691715   46388 cri.go:89] found id: ""
	I0115 10:43:47.691724   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:47.691777   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.696380   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:47.696467   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:47.738315   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:47.738340   46388 cri.go:89] found id: ""
	I0115 10:43:47.738349   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:47.738402   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.742810   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:47.742870   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:47.783082   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:47.783114   46388 cri.go:89] found id: ""
	I0115 10:43:47.783124   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:47.783178   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.787381   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:47.787432   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:47.832325   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:47.832353   46388 cri.go:89] found id: ""
	I0115 10:43:47.832363   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:47.832420   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.836957   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:47.837014   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:47.877146   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:47.877169   46388 cri.go:89] found id: ""
	I0115 10:43:47.877178   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:47.877231   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.881734   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:47.881782   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:47.921139   46388 cri.go:89] found id: ""
	I0115 10:43:47.921169   46388 logs.go:284] 0 containers: []
	W0115 10:43:47.921180   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:47.921188   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:47.921236   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:47.959829   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:47.959857   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:47.959864   46388 cri.go:89] found id: ""
	I0115 10:43:47.959872   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:47.959924   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.964105   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.968040   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:47.968059   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:48.017234   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:48.017266   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:48.073552   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:48.073583   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:48.512500   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:48.512539   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:48.564545   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:48.564578   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:48.609739   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:48.609768   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:48.654076   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:48.654106   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:48.691287   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:48.691314   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:48.739023   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:48.739063   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:48.791976   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:48.792018   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:48.808633   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:48.808659   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:48.933063   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:48.933099   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:48.974794   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:48.974825   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:49.735197   46387 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735227   46387 pod_ready.go:81] duration metric: took 3.504112323s waiting for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:49.735237   46387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735243   46387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740497   46387 pod_ready.go:92] pod "kube-proxy-lh96p" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:49.740515   46387 pod_ready.go:81] duration metric: took 5.267229ms waiting for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740525   46387 pod_ready.go:38] duration metric: took 7.083874855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:49.740537   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:49.740580   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:49.755697   46387 api_server.go:72] duration metric: took 8.149828702s to wait for apiserver process to appear ...
	I0115 10:43:49.755718   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:49.755731   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:43:49.762148   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:43:49.762995   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:43:49.763013   46387 api_server.go:131] duration metric: took 7.290279ms to wait for apiserver health ...
	I0115 10:43:49.763019   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:49.766597   46387 system_pods.go:59] 4 kube-system pods found
	I0115 10:43:49.766615   46387 system_pods.go:61] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.766620   46387 system_pods.go:61] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.766626   46387 system_pods.go:61] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.766631   46387 system_pods.go:61] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.766637   46387 system_pods.go:74] duration metric: took 3.613036ms to wait for pod list to return data ...
	I0115 10:43:49.766642   46387 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:49.768826   46387 default_sa.go:45] found service account: "default"
	I0115 10:43:49.768844   46387 default_sa.go:55] duration metric: took 2.197235ms for default service account to be created ...
	I0115 10:43:49.768850   46387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:49.772271   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:49.772296   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.772304   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.772314   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.772321   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.772339   46387 retry.go:31] will retry after 223.439669ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.001140   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.001165   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.001170   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.001176   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.001181   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.001198   46387 retry.go:31] will retry after 329.400473ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.335362   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.335386   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.335391   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.335398   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.335403   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.335420   46387 retry.go:31] will retry after 466.919302ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.806617   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.806643   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.806649   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.806655   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.806660   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.806678   46387 retry.go:31] will retry after 596.303035ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.407231   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:51.407257   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:51.407264   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:51.407271   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:51.407275   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:51.407292   46387 retry.go:31] will retry after 688.903723ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.102330   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.102357   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.102364   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.102374   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.102382   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.102399   46387 retry.go:31] will retry after 817.783297ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.925586   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.925612   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.925620   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.925629   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.925636   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.925658   46387 retry.go:31] will retry after 797.004884ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:53.728788   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:53.728812   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:53.728817   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:53.728823   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:53.728827   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:53.728843   46387 retry.go:31] will retry after 1.021568746s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.528236   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:43:51.533236   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:43:51.534697   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:43:51.534714   46388 api_server.go:131] duration metric: took 3.932606059s to wait for apiserver health ...
	I0115 10:43:51.534721   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:51.534744   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:51.534796   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:51.571704   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.571730   46388 cri.go:89] found id: ""
	I0115 10:43:51.571740   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:51.571793   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.576140   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:51.576201   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:51.614720   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:51.614803   46388 cri.go:89] found id: ""
	I0115 10:43:51.614823   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:51.614909   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.620904   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:51.620966   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:51.659679   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.659711   46388 cri.go:89] found id: ""
	I0115 10:43:51.659721   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:51.659779   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.664223   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:51.664275   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:51.701827   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:51.701850   46388 cri.go:89] found id: ""
	I0115 10:43:51.701858   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:51.701915   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.707296   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:51.707354   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:51.745962   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:51.745989   46388 cri.go:89] found id: ""
	I0115 10:43:51.746006   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:51.746061   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.750872   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:51.750942   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:51.796600   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:51.796637   46388 cri.go:89] found id: ""
	I0115 10:43:51.796647   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:51.796697   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.801250   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:51.801321   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:51.845050   46388 cri.go:89] found id: ""
	I0115 10:43:51.845072   46388 logs.go:284] 0 containers: []
	W0115 10:43:51.845081   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:51.845087   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:51.845144   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:51.880907   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:51.880935   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:51.880942   46388 cri.go:89] found id: ""
	I0115 10:43:51.880951   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:51.880997   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.885202   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.889086   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:51.889108   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.939740   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:51.939770   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.977039   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:51.977068   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:52.024927   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:52.024960   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:52.071850   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:52.071882   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:52.123313   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:52.123343   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:52.137274   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:52.137297   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:52.260488   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:52.260525   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:52.301121   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:52.301156   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:52.346323   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:52.346349   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:52.402759   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:52.402788   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:52.457075   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:52.457103   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:52.811321   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:52.811359   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:55.374293   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:55.374327   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.374335   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.374342   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.374348   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.374354   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.374361   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.374371   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.374382   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.374394   46388 system_pods.go:74] duration metric: took 3.83966542s to wait for pod list to return data ...
	I0115 10:43:55.374407   46388 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:55.376812   46388 default_sa.go:45] found service account: "default"
	I0115 10:43:55.376833   46388 default_sa.go:55] duration metric: took 2.418755ms for default service account to be created ...
	I0115 10:43:55.376843   46388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:55.383202   46388 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:55.383227   46388 system_pods.go:89] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.383236   46388 system_pods.go:89] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.383244   46388 system_pods.go:89] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.383285   46388 system_pods.go:89] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.383297   46388 system_pods.go:89] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.383303   46388 system_pods.go:89] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.383314   46388 system_pods.go:89] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.383325   46388 system_pods.go:89] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.383338   46388 system_pods.go:126] duration metric: took 6.489813ms to wait for k8s-apps to be running ...
	I0115 10:43:55.383349   46388 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:55.383401   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:55.399074   46388 system_svc.go:56] duration metric: took 15.719638ms WaitForService to wait for kubelet.
	I0115 10:43:55.399096   46388 kubeadm.go:581] duration metric: took 4m22.589439448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:55.399118   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:55.403855   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:55.403883   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:55.403896   46388 node_conditions.go:105] duration metric: took 4.771651ms to run NodePressure ...
	I0115 10:43:55.403908   46388 start.go:228] waiting for startup goroutines ...
	I0115 10:43:55.403917   46388 start.go:233] waiting for cluster config update ...
	I0115 10:43:55.403930   46388 start.go:242] writing updated cluster config ...
	I0115 10:43:55.404244   46388 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:55.453146   46388 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0115 10:43:55.455321   46388 out.go:177] * Done! kubectl is now configured to use "no-preload-824502" cluster and "default" namespace by default
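	The entries above trace the readiness checks this run performs before printing "Done!": poll the apiserver /healthz endpoint, enumerate the control-plane containers with crictl, then wait for every expected kube-system pod to report Running. Below is a minimal sketch of the healthz polling step in Go; the URL is taken from the log, while the helper name, poll interval, and timeout are illustrative assumptions, not minikube's actual implementation.

	// healthz_probe.go -- illustrative sketch of "waiting for apiserver healthz status".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns 200 or the
	// timeout expires. Interval and timeout values here are placeholders.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver in this setup serves a self-signed certificate,
			// so the probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil // healthz answered 200: control plane is responding
				}
			}
			time.Sleep(2 * time.Second) // poll interval (illustrative)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.50.136:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned 200: ok")
	}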
	I0115 10:43:54.756077   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:54.756099   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:54.756104   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:54.756111   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:54.756116   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:54.756131   46387 retry.go:31] will retry after 1.152306172s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:55.913769   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:55.913792   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:55.913798   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:55.913804   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.913810   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:55.913826   46387 retry.go:31] will retry after 2.261296506s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:58.179679   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:58.179704   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:58.179710   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:58.179718   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:58.179722   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:58.179739   46387 retry.go:31] will retry after 2.012023518s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:00.197441   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:00.197471   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:00.197476   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:00.197483   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:00.197487   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:00.197505   46387 retry.go:31] will retry after 3.341619522s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:03.543730   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:03.543752   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:03.543757   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:03.543766   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:03.543771   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:03.543788   46387 retry.go:31] will retry after 2.782711895s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:06.332250   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:06.332276   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:06.332281   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:06.332288   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:06.332294   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:06.332310   46387 retry.go:31] will retry after 5.379935092s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:11.718269   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:11.718315   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:11.718324   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:11.718334   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:11.718343   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:11.718364   46387 retry.go:31] will retry after 6.238812519s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:17.963126   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:17.963150   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:17.963155   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:17.963162   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:17.963167   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:17.963183   46387 retry.go:31] will retry after 7.774120416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:25.743164   46387 system_pods.go:86] 6 kube-system pods found
	I0115 10:44:25.743190   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:25.743196   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:25.743200   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:25.743204   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:25.743210   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:25.743214   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:25.743231   46387 retry.go:31] will retry after 8.584433466s: missing components: kube-apiserver, kube-scheduler
	I0115 10:44:34.335720   46387 system_pods.go:86] 7 kube-system pods found
	I0115 10:44:34.335751   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:34.335759   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:34.335777   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:34.335785   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:34.335793   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:34.335801   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:34.335815   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:34.335834   46387 retry.go:31] will retry after 13.073630932s: missing components: kube-apiserver
	I0115 10:44:47.415277   46387 system_pods.go:86] 8 kube-system pods found
	I0115 10:44:47.415304   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:47.415311   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:47.415318   46387 system_pods.go:89] "kube-apiserver-old-k8s-version-206509" [e708ba3e-5deb-4b60-ab5b-52c4d671fa46] Running
	I0115 10:44:47.415326   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:47.415332   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:47.415339   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:47.415349   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:47.415355   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:47.415371   46387 system_pods.go:126] duration metric: took 57.64651504s to wait for k8s-apps to be running ...
	I0115 10:44:47.415382   46387 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:44:47.415444   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:44:47.433128   46387 system_svc.go:56] duration metric: took 17.740925ms WaitForService to wait for kubelet.
	I0115 10:44:47.433150   46387 kubeadm.go:581] duration metric: took 1m5.827285253s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:44:47.433174   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:44:47.435664   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:44:47.435685   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:44:47.435695   46387 node_conditions.go:105] duration metric: took 2.516113ms to run NodePressure ...
	I0115 10:44:47.435708   46387 start.go:228] waiting for startup goroutines ...
	I0115 10:44:47.435716   46387 start.go:233] waiting for cluster config update ...
	I0115 10:44:47.435728   46387 start.go:242] writing updated cluster config ...
	I0115 10:44:47.436091   46387 ssh_runner.go:195] Run: rm -f paused
	I0115 10:44:47.492053   46387 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0115 10:44:47.494269   46387 out.go:177] 
	W0115 10:44:47.495828   46387 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0115 10:44:47.497453   46387 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0115 10:44:47.498880   46387 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-206509" cluster and "default" namespace by default
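	The repeated "will retry after ...: missing components: ..." entries above come from a backoff loop that re-lists kube-system pods until the expected control-plane components report Running. The Go sketch below approximates that loop; checkComponents is a hypothetical stand-in for the real pod query, and the backoff values only roughly match the intervals seen in the log.

	// wait_for_pods.go -- rough sketch of the retry/backoff wait for kube-system pods.
	package main

	import (
		"fmt"
		"time"
	)

	var attempts int

	// checkComponents is a placeholder for querying kube-system pods; it pretends
	// the control plane becomes Ready after a few checks.
	func checkComponents() []string {
		attempts++
		if attempts < 4 {
			return []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
		}
		return nil
	}

	// waitForSystemPods retries the check with a growing delay until nothing is
	// missing or the overall timeout is exceeded.
	func waitForSystemPods(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for {
			missing := checkComponents()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; still missing: %v", missing)
			}
			fmt.Printf("will retry after %s: missing components: %v\n", backoff, missing)
			time.Sleep(backoff)
			if backoff < 10*time.Second {
				backoff *= 2 // grow the wait, roughly as the logged intervals do
			}
		}
	}

	func main() {
		if err := waitForSystemPods(6 * time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("all expected kube-system components are running")
	}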
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:37:38 UTC, ends at Mon 2024-01-15 10:53:49 UTC. --
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.244248366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e79ebccb-4725-4138-ba1c-7fcbf4708933 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.294595108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8dbfccab-d2ff-41a9-b98f-8ada9063172c name=/runtime.v1.RuntimeService/Version
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.294678976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8dbfccab-d2ff-41a9-b98f-8ada9063172c name=/runtime.v1.RuntimeService/Version
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.295882651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d38e3b39-1522-4d6b-a93f-6ad8b60d7ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.296282102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316029296270239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d38e3b39-1522-4d6b-a93f-6ad8b60d7ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.296777813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=881ee7b3-6ffb-43fa-9a73-5f513e272779 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.296844046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=881ee7b3-6ffb-43fa-9a73-5f513e272779 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.296996308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=881ee7b3-6ffb-43fa-9a73-5f513e272779 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.305145577Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=69c0ba77-d220-4ade-8dd9-744da24933d3 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.305385397Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:67da82ef00f7443b2acdff8b88b4b0e6d03a38f2ec99552552bdf38cabc0162a,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-q46p8,Uid:98c171f1-6607-4831-ba9f-92391ae2c887,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315423994017445,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-q46p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c171f1-6607-4831-ba9f-92391ae2c887,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:43.649899244Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:312f72ca-acf5-4ff0-8444-01001f408d
09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315423889847462,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-15T10:43:42.641260857Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-9k84f,Uid:2c958bfa-7681-48d0-9627-5116a30efc8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315421067704190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:40.684669067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&PodSandboxMetadata{Name:kube-proxy-lh96p,Uid:46eabc9f-7177-4a93-ab8
4-a131e78e1f38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315420831618264,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:40.486602663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-206509,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394038755456,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-15T10:43:13.634397094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-206509,Uid:1211e12708de87c59f58e6cccb4974df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394033774067,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1211e12708de87c59f58e6cccb4974df,kubernetes.io/config.seen: 2024-01-15T10:43:13.639889882Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f90ec8ef16364825d107b2944467
79843e6380a2018d5b187d6871a9396156de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-206509,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394023242409,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-15T10:43:13.636549742Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-206509,Uid:e127aecf07397be5b721df8f3b50ed22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315393969543851,Labels:map[string]string{component: kube-apiserver,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e127aecf07397be5b721df8f3b50ed22,kubernetes.io/config.seen: 2024-01-15T10:43:13.631047648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=69c0ba77-d220-4ade-8dd9-744da24933d3 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.305988866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a89c331-1d92-4b1c-a582-693f6c136b5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.306078012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a89c331-1d92-4b1c-a582-693f6c136b5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.306278366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a89c331-1d92-4b1c-a582-693f6c136b5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.308059324Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d21fa887-7885-48c4-99a7-a27d3db625ef name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.308355039Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:67da82ef00f7443b2acdff8b88b4b0e6d03a38f2ec99552552bdf38cabc0162a,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-q46p8,Uid:98c171f1-6607-4831-ba9f-92391ae2c887,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315423994017445,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-q46p8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c171f1-6607-4831-ba9f-92391ae2c887,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:43.649899244Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:312f72ca-acf5-4ff0-8444-01001f408d
09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315423889847462,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-15T10:43:42.641260857Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-9k84f,Uid:2c958bfa-7681-48d0-9627-5116a30efc8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315421067704190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:40.684669067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&PodSandboxMetadata{Name:kube-proxy-lh96p,Uid:46eabc9f-7177-4a93-ab8
4-a131e78e1f38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315420831618264,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-15T10:43:40.486602663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-206509,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394038755456,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-15T10:43:13.634397094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-206509,Uid:1211e12708de87c59f58e6cccb4974df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394033774067,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1211e12708de87c59f58e6cccb4974df,kubernetes.io/config.seen: 2024-01-15T10:43:13.639889882Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f90ec8ef16364825d107b2944467
79843e6380a2018d5b187d6871a9396156de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-206509,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315394023242409,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-15T10:43:13.636549742Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-206509,Uid:e127aecf07397be5b721df8f3b50ed22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705315393969543851,Labels:map[string]string{component: kube-apiserver,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e127aecf07397be5b721df8f3b50ed22,kubernetes.io/config.seen: 2024-01-15T10:43:13.631047648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=d21fa887-7885-48c4-99a7-a27d3db625ef name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.309085987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6623195-4a02-4800-b06b-6b83310845b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.309192595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6623195-4a02-4800-b06b-6b83310845b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.309559809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6623195-4a02-4800-b06b-6b83310845b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.341291914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1ac9ec88-f66e-4ab2-91d5-69eeea334bad name=/runtime.v1.RuntimeService/Version
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.341359379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1ac9ec88-f66e-4ab2-91d5-69eeea334bad name=/runtime.v1.RuntimeService/Version
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.342655877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7760e1d3-bec5-4104-96dd-12a16d0aaa10 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.343113529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316029343093728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7760e1d3-bec5-4104-96dd-12a16d0aaa10 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.343914880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9dac9e28-f3a7-4671-8be6-fdef90203e2e name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.343986516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9dac9e28-f3a7-4671-8be6-fdef90203e2e name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:53:49 old-k8s-version-206509 crio[733]: time="2024-01-15 10:53:49.344148372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9dac9e28-f3a7-4671-8be6-fdef90203e2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	274ec7c48ab7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   6e72267ed7049       storage-provisioner
	4c363f7ffd7bd       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   303d62fb6c36e       kube-proxy-lh96p
	6a694c01d0dbd       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   01a88be5a547c       coredns-5644d7b6d9-9k84f
	49abd2cf9830f       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   f1d772c682010       etcd-old-k8s-version-206509
	6e41dd19c953b       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   f90ec8ef16364       kube-scheduler-old-k8s-version-206509
	48bba9a9313b0       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   31342c0e11eed       kube-apiserver-old-k8s-version-206509
	fd62511730247       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   649e66c4c34b9       kube-controller-manager-old-k8s-version-206509
	
	
	==> coredns [6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e] <==
	.:53
	2024-01-15T10:43:42.442Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2024-01-15T10:43:42.442Z [INFO] CoreDNS-1.6.2
	2024-01-15T10:43:42.442Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2024-01-15T10:44:16.262Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               old-k8s-version-206509
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-206509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=old-k8s-version-206509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:53:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:53:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:53:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:53:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.70
	  Hostname:    old-k8s-version-206509
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 989244633e474b0283881692ca4b18d6
	 System UUID:                98924463-3e47-4b02-8388-1692ca4b18d6
	 Boot ID:                    65965bab-0462-4790-b60f-27d2733e1f9f
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-9k84f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-206509                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                kube-apiserver-old-k8s-version-206509             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                kube-controller-manager-old-k8s-version-206509    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                kube-proxy-lh96p                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-206509             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                metrics-server-74d5856cc6-q46p8                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-206509  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan15 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068658] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.334851] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.367184] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147498] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.643727] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883321] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.103317] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.153717] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.112346] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[  +0.203792] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[Jan15 10:38] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +0.372683] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +26.294521] kauditd_printk_skb: 18 callbacks suppressed
	[Jan15 10:43] systemd-fstab-generator[3199]: Ignoring "noauto" for root device
	[ +28.392137] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.065809] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94] <==
	2024-01-15 10:43:16.930888 I | raft: 29bd607c3100bf45 became follower at term 0
	2024-01-15 10:43:16.930896 I | raft: newRaft 29bd607c3100bf45 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-15 10:43:16.930900 I | raft: 29bd607c3100bf45 became follower at term 1
	2024-01-15 10:43:16.939098 W | auth: simple token is not cryptographically signed
	2024-01-15 10:43:16.943021 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-15 10:43:16.944342 I | etcdserver: 29bd607c3100bf45 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-15 10:43:16.945172 I | etcdserver/membership: added member 29bd607c3100bf45 [https://192.168.61.70:2380] to cluster c2d50656252384c
	2024-01-15 10:43:16.945633 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 10:43:16.945804 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-15 10:43:16.945966 I | embed: listening for metrics on http://192.168.61.70:2381
	2024-01-15 10:43:17.131375 I | raft: 29bd607c3100bf45 is starting a new election at term 1
	2024-01-15 10:43:17.131481 I | raft: 29bd607c3100bf45 became candidate at term 2
	2024-01-15 10:43:17.131495 I | raft: 29bd607c3100bf45 received MsgVoteResp from 29bd607c3100bf45 at term 2
	2024-01-15 10:43:17.131503 I | raft: 29bd607c3100bf45 became leader at term 2
	2024-01-15 10:43:17.131510 I | raft: raft.node: 29bd607c3100bf45 elected leader 29bd607c3100bf45 at term 2
	2024-01-15 10:43:17.131992 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-15 10:43:17.133534 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-15 10:43:17.134117 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-15 10:43:17.134228 I | etcdserver: published {Name:old-k8s-version-206509 ClientURLs:[https://192.168.61.70:2379]} to cluster c2d50656252384c
	2024-01-15 10:43:17.134284 I | embed: ready to serve client requests
	2024-01-15 10:43:17.134678 I | embed: ready to serve client requests
	2024-01-15 10:43:17.135783 I | embed: serving client requests on 192.168.61.70:2379
	2024-01-15 10:43:17.143783 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 10:53:17.664084 I | mvcc: store.index: compact 666
	2024-01-15 10:53:17.666390 I | mvcc: finished scheduled compaction at 666 (took 1.672334ms)
	
	
	==> kernel <==
	 10:53:49 up 16 min,  0 users,  load average: 0.15, 0.18, 0.16
	Linux old-k8s-version-206509 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5] <==
	I0115 10:46:44.371252       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:46:44.371375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:46:44.371512       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:46:44.371526       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:48:22.004241       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:48:22.004390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:48:22.004534       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:48:22.004543       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:49:22.005004       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:49:22.005224       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:49:22.005317       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:49:22.005339       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:51:22.005729       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:51:22.005899       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:51:22.005965       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:51:22.005972       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:53:22.005641       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:53:22.005798       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:53:22.005988       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:53:22.006002       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37] <==
	E0115 10:47:42.926871       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:47:56.936963       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:48:13.179157       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:48:28.939208       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:48:43.431591       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:49:00.941996       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:49:13.684046       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:49:32.944620       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:49:43.935911       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:50:04.947243       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:50:14.188334       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:50:36.949378       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:50:44.440001       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:51:08.951284       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:51:14.692507       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:51:40.953581       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:51:44.944589       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:52:12.955686       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:52:15.197023       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:52:44.957614       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:52:45.448994       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0115 10:53:15.701295       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:53:16.960200       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:53:45.953126       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:53:48.962846       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175] <==
	W0115 10:43:43.146772       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0115 10:43:43.156796       1 node.go:135] Successfully retrieved node IP: 192.168.61.70
	I0115 10:43:43.156910       1 server_others.go:149] Using iptables Proxier.
	I0115 10:43:43.157766       1 server.go:529] Version: v1.16.0
	I0115 10:43:43.166401       1 config.go:313] Starting service config controller
	I0115 10:43:43.166832       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0115 10:43:43.166955       1 config.go:131] Starting endpoints config controller
	I0115 10:43:43.166980       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0115 10:43:43.269537       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0115 10:43:43.269789       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19] <==
	W0115 10:43:20.997270       1 authentication.go:79] Authentication is disabled
	I0115 10:43:20.997293       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0115 10:43:21.002178       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0115 10:43:21.036605       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:21.056409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 10:43:21.063134       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 10:43:21.063558       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 10:43:21.063598       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 10:43:21.064083       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 10:43:21.064113       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 10:43:21.064144       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 10:43:21.064186       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 10:43:21.066928       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 10:43:21.067679       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:22.055794       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:22.057579       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 10:43:22.065582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 10:43:22.067732       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 10:43:22.069264       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 10:43:22.069896       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 10:43:22.071885       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 10:43:22.073165       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 10:43:22.074296       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 10:43:22.076283       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 10:43:22.078563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:37:38 UTC, ends at Mon 2024-01-15 10:53:49 UTC. --
	Jan 15 10:49:22 old-k8s-version-206509 kubelet[3205]: E0115 10:49:22.349052    3205 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:49:22 old-k8s-version-206509 kubelet[3205]: E0115 10:49:22.349272    3205 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:49:22 old-k8s-version-206509 kubelet[3205]: E0115 10:49:22.349350    3205 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:49:22 old-k8s-version-206509 kubelet[3205]: E0115 10:49:22.349388    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 15 10:49:34 old-k8s-version-206509 kubelet[3205]: E0115 10:49:34.302503    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:49:48 old-k8s-version-206509 kubelet[3205]: E0115 10:49:48.302317    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:50:00 old-k8s-version-206509 kubelet[3205]: E0115 10:50:00.303041    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:50:15 old-k8s-version-206509 kubelet[3205]: E0115 10:50:15.302826    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:50:27 old-k8s-version-206509 kubelet[3205]: E0115 10:50:27.302212    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:50:40 old-k8s-version-206509 kubelet[3205]: E0115 10:50:40.302248    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:50:51 old-k8s-version-206509 kubelet[3205]: E0115 10:50:51.302576    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:51:05 old-k8s-version-206509 kubelet[3205]: E0115 10:51:05.302332    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:51:18 old-k8s-version-206509 kubelet[3205]: E0115 10:51:18.302697    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:51:33 old-k8s-version-206509 kubelet[3205]: E0115 10:51:33.302244    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:51:48 old-k8s-version-206509 kubelet[3205]: E0115 10:51:48.301964    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:01 old-k8s-version-206509 kubelet[3205]: E0115 10:52:01.302782    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:14 old-k8s-version-206509 kubelet[3205]: E0115 10:52:14.302577    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:27 old-k8s-version-206509 kubelet[3205]: E0115 10:52:27.302506    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:38 old-k8s-version-206509 kubelet[3205]: E0115 10:52:38.302365    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:50 old-k8s-version-206509 kubelet[3205]: E0115 10:52:50.302644    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:01 old-k8s-version-206509 kubelet[3205]: E0115 10:53:01.302130    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:12 old-k8s-version-206509 kubelet[3205]: E0115 10:53:12.302112    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:13 old-k8s-version-206509 kubelet[3205]: E0115 10:53:13.383288    3205 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 15 10:53:26 old-k8s-version-206509 kubelet[3205]: E0115 10:53:26.302257    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:38 old-k8s-version-206509 kubelet[3205]: E0115 10:53:38.302356    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d] <==
	I0115 10:43:44.505234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:43:44.514853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:43:44.515108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:43:44.524666       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:43:44.525814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc!
	I0115 10:43:44.527318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8072bbe3-0aed-4777-89c1-3b997a5a8d93", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc became leader
	I0115 10:43:44.626506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-206509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-q46p8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8: exit status 1 (70.328644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-q46p8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (382.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-781270 -n embed-certs-781270
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:58:32.862856063 +0000 UTC m=+5524.834023477
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-781270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-781270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.446µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-781270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
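(For manual verification outside the test harness, one way to see which image the scraper deployment actually references, assuming the cluster is still reachable under the same kubectl context, is a jsonpath query along these lines; the expression below is illustrative and not part of the test code.)

	kubectl --context embed-certs-781270 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[*].image}'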
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-781270 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-781270 logs -n 25: (1.341370619s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:56 UTC | 15 Jan 24 10:56 UTC |
	| start   | -p newest-cni-273069 --memory=2200 --alsologtostderr   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:56 UTC | 15 Jan 24 10:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-273069             | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-273069                                   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-273069                  | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-273069 --memory=2200 --alsologtostderr   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:58 UTC | 15 Jan 24 10:58 UTC |
	| start   | -p auto-453827 --memory=3072                           | auto-453827                  | jenkins | v1.32.0 | 15 Jan 24 10:58 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:58:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:58:10.848786   52411 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:58:10.848880   52411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:58:10.848889   52411 out.go:309] Setting ErrFile to fd 2...
	I0115 10:58:10.848894   52411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:58:10.849108   52411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:58:10.849632   52411 out.go:303] Setting JSON to false
	I0115 10:58:10.850733   52411 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5991,"bootTime":1705310300,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:58:10.850796   52411 start.go:138] virtualization: kvm guest
	I0115 10:58:10.853361   52411 out.go:177] * [auto-453827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:58:10.854958   52411 notify.go:220] Checking for updates...
	I0115 10:58:10.854996   52411 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:58:10.856502   52411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:58:10.857878   52411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:58:10.859251   52411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:58:10.860496   52411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:58:10.862140   52411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:58:10.863887   52411 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:58:10.863984   52411 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:58:10.864135   52411 config.go:182] Loaded profile config "newest-cni-273069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:58:10.864236   52411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:58:10.901623   52411 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 10:58:10.903159   52411 start.go:298] selected driver: kvm2
	I0115 10:58:10.903175   52411 start.go:902] validating driver "kvm2" against <nil>
	I0115 10:58:10.903190   52411 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:58:10.904118   52411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:58:10.904218   52411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:58:10.920529   52411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:58:10.920564   52411 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 10:58:10.920746   52411 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:58:10.920798   52411 cni.go:84] Creating CNI manager for ""
	I0115 10:58:10.920810   52411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:58:10.920822   52411 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 10:58:10.920830   52411 start_flags.go:321] config:
	{Name:auto-453827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-453827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:58:10.920978   52411 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:58:10.923608   52411 out.go:177] * Starting control plane node auto-453827 in cluster auto-453827
	I0115 10:58:10.924948   52411 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:58:10.924993   52411 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:58:10.925004   52411 cache.go:56] Caching tarball of preloaded images
	I0115 10:58:10.925096   52411 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:58:10.925117   52411 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:58:10.925206   52411 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/auto-453827/config.json ...
	I0115 10:58:10.925223   52411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/auto-453827/config.json: {Name:mk6f04e5c2886d5f92fcb161fabd87f75aac46b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:58:10.925361   52411 start.go:365] acquiring machines lock for auto-453827: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:58:11.403209   52411 start.go:369] acquired machines lock for "auto-453827" in 477.821719ms
	I0115 10:58:11.403285   52411 start.go:93] Provisioning new machine with config: &{Name:auto-453827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-453827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:58:11.403386   52411 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 10:58:10.530685   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.531099   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:10.531143   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.531276   52070 provision.go:138] copyHostCerts
	I0115 10:58:10.531343   52070 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:58:10.531359   52070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:58:10.531414   52070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:58:10.531527   52070 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:58:10.531538   52070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:58:10.531575   52070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:58:10.531668   52070 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:58:10.531678   52070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:58:10.531717   52070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:58:10.531776   52070 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.newest-cni-273069 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube newest-cni-273069]
	I0115 10:58:10.654753   52070 provision.go:172] copyRemoteCerts
	I0115 10:58:10.654807   52070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:58:10.654828   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:10.657441   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.657835   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:10.657891   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.658084   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:10.658297   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:10.658489   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:10.658669   52070 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/newest-cni-273069/id_rsa Username:docker}
	I0115 10:58:10.747978   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:58:10.770797   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:58:10.793411   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
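The cert provisioning above reuses the shared minikube CA, mints a server certificate whose SANs cover the guest IP, localhost, and the machine name, and copies the results to /etc/docker on the guest. A rough openssl sketch of an equivalent certificate follows; minikube generates these in Go, so the key size, validity, and exact extensions here are illustrative assumptions, not the tool's output:

    # Key + CSR for the machine; the org matches the org= field in the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.newest-cni-273069" -out server.csr
    # Sign with the shared minikube CA and attach the SANs listed in the log.
    openssl x509 -req -in server.csr \
      -CA .minikube/certs/ca.pem -CAkey .minikube/certs/ca-key.pem -CAcreateserial \
      -days 825 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.61.238,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-273069')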
	I0115 10:58:10.819925   52070 provision.go:86] duration metric: configureAuth took 720.205856ms
	I0115 10:58:10.819946   52070 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:58:10.820102   52070 config.go:182] Loaded profile config "newest-cni-273069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:58:10.820170   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:10.823312   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.823720   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:10.823761   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:10.823918   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:10.824119   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:10.824307   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:10.824517   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:10.824698   52070 main.go:141] libmachine: Using SSH client type: native
	I0115 10:58:10.825103   52070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0115 10:58:10.825134   52070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:58:11.140436   52070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:58:11.140459   52070 machine.go:91] provisioned docker machine in 1.328555921s
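The container-runtime option set just above (the %!s(MISSING) is a logging artifact; the payload appears in the command output) boils down to a one-line sysconfig drop-in plus a runtime restart. As a shell sketch on the guest, assuming, as on this Buildroot image, that the crio unit reads /etc/sysconfig/crio.minikube:

    # Write the drop-in the crio unit sources, then restart so the flag applies.
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio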
	I0115 10:58:11.140468   52070 start.go:300] post-start starting for "newest-cni-273069" (driver="kvm2")
	I0115 10:58:11.140481   52070 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:58:11.140495   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:58:11.140779   52070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:58:11.140806   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:11.143783   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.144186   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:11.144213   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.144424   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:11.144617   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:11.144782   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:11.144932   52070 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/newest-cni-273069/id_rsa Username:docker}
	I0115 10:58:11.238554   52070 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:58:11.242779   52070 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:58:11.242816   52070 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:58:11.242902   52070 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:58:11.243014   52070 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:58:11.243173   52070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:58:11.253751   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:58:11.275677   52070 start.go:303] post-start completed in 135.195306ms
	I0115 10:58:11.275707   52070 fix.go:56] fixHost completed within 20.944460776s
	I0115 10:58:11.275732   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:11.278964   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.279515   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:11.279544   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.279693   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:11.279889   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:11.280063   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:11.280243   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:11.280432   52070 main.go:141] libmachine: Using SSH client type: native
	I0115 10:58:11.280732   52070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0115 10:58:11.280743   52070 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:58:11.403096   52070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705316291.346014276
	
	I0115 10:58:11.403116   52070 fix.go:206] guest clock: 1705316291.346014276
	I0115 10:58:11.403123   52070 fix.go:219] Guest: 2024-01-15 10:58:11.346014276 +0000 UTC Remote: 2024-01-15 10:58:11.275711349 +0000 UTC m=+21.100398241 (delta=70.302927ms)
	I0115 10:58:11.403139   52070 fix.go:190] guest clock delta is within tolerance: 70.302927ms
	I0115 10:58:11.403144   52070 start.go:83] releasing machines lock for "newest-cni-273069", held for 21.071912499s
	I0115 10:58:11.403164   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:58:11.403416   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetIP
	I0115 10:58:11.406681   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.407115   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:11.407150   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.407350   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:58:11.407884   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:58:11.408102   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:58:11.408233   52070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:58:11.408274   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:11.408329   52070 ssh_runner.go:195] Run: cat /version.json
	I0115 10:58:11.408355   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHHostname
	I0115 10:58:11.411452   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.411651   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.411919   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:11.411943   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.412105   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:11.412149   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:11.412163   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:11.412312   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHPort
	I0115 10:58:11.412325   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:11.412553   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHKeyPath
	I0115 10:58:11.412565   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:11.412697   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetSSHUsername
	I0115 10:58:11.412707   52070 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/newest-cni-273069/id_rsa Username:docker}
	I0115 10:58:11.412846   52070 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/newest-cni-273069/id_rsa Username:docker}
	I0115 10:58:11.541436   52070 ssh_runner.go:195] Run: systemctl --version
	I0115 10:58:11.548981   52070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:58:11.694847   52070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:58:11.700875   52070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:58:11.700952   52070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:58:11.716681   52070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:58:11.716706   52070 start.go:475] detecting cgroup driver to use...
	I0115 10:58:11.716776   52070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:58:11.731179   52070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:58:11.744240   52070 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:58:11.744288   52070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:58:11.758263   52070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:58:11.772606   52070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:58:11.896025   52070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:58:12.032766   52070 docker.go:233] disabling docker service ...
	I0115 10:58:12.032846   52070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:58:12.048704   52070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:58:12.062313   52070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:58:12.194707   52070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:58:12.312842   52070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:58:12.329155   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:58:12.349149   52070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:58:12.349214   52070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:58:12.359110   52070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:58:12.359194   52070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:58:12.369609   52070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:58:12.379300   52070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:58:12.388719   52070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:58:12.398601   52070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:58:12.407235   52070 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:58:12.407291   52070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:58:12.421057   52070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:58:12.429622   52070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:58:12.564977   52070 ssh_runner.go:195] Run: sudo systemctl restart crio
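Taken together, the runtime configuration steps above are equivalent to the following shell on the guest (paths and values exactly as logged):

    # crictl talks to the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pause image and cgroup driver for CRI-O.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites: bridged traffic through iptables and IPv4 forwarding.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Pick up the changes.
    sudo systemctl daemon-reload
    sudo systemctl restart crio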
	I0115 10:58:12.742513   52070 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:58:12.742581   52070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:58:12.749127   52070 start.go:543] Will wait 60s for crictl version
	I0115 10:58:12.749181   52070 ssh_runner.go:195] Run: which crictl
	I0115 10:58:12.752788   52070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:58:12.789341   52070 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:58:12.789430   52070 ssh_runner.go:195] Run: crio --version
	I0115 10:58:12.839955   52070 ssh_runner.go:195] Run: crio --version
	I0115 10:58:12.890606   52070 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:58:12.892330   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetIP
	I0115 10:58:12.895528   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:12.896010   52070 main.go:141] libmachine: (newest-cni-273069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:87:c9", ip: ""} in network mk-newest-cni-273069: {Iface:virbr4 ExpiryTime:2024-01-15 11:58:03 +0000 UTC Type:0 Mac:52:54:00:8b:87:c9 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:newest-cni-273069 Clientid:01:52:54:00:8b:87:c9}
	I0115 10:58:12.896040   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined IP address 192.168.61.238 and MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:12.896232   52070 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:58:12.900925   52070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:58:12.916709   52070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0115 10:58:12.918281   52070 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:58:12.918366   52070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:58:12.970331   52070 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:58:12.970411   52070 ssh_runner.go:195] Run: which lz4
	I0115 10:58:12.974896   52070 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:58:12.979013   52070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:58:12.979041   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0115 10:58:14.630962   52070 crio.go:444] Took 1.656126 seconds to copy over tarball
	I0115 10:58:14.631030   52070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:58:11.405642   52411 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0115 10:58:11.405826   52411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:58:11.405879   52411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:58:11.422508   52411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0115 10:58:11.423023   52411 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:58:11.423658   52411 main.go:141] libmachine: Using API Version  1
	I0115 10:58:11.423691   52411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:58:11.424058   52411 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:58:11.424257   52411 main.go:141] libmachine: (auto-453827) Calling .GetMachineName
	I0115 10:58:11.424524   52411 main.go:141] libmachine: (auto-453827) Calling .DriverName
	I0115 10:58:11.424704   52411 start.go:159] libmachine.API.Create for "auto-453827" (driver="kvm2")
	I0115 10:58:11.424742   52411 client.go:168] LocalClient.Create starting
	I0115 10:58:11.424773   52411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 10:58:11.424807   52411 main.go:141] libmachine: Decoding PEM data...
	I0115 10:58:11.424824   52411 main.go:141] libmachine: Parsing certificate...
	I0115 10:58:11.424875   52411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 10:58:11.424892   52411 main.go:141] libmachine: Decoding PEM data...
	I0115 10:58:11.424907   52411 main.go:141] libmachine: Parsing certificate...
	I0115 10:58:11.424920   52411 main.go:141] libmachine: Running pre-create checks...
	I0115 10:58:11.424930   52411 main.go:141] libmachine: (auto-453827) Calling .PreCreateCheck
	I0115 10:58:11.425350   52411 main.go:141] libmachine: (auto-453827) Calling .GetConfigRaw
	I0115 10:58:11.425781   52411 main.go:141] libmachine: Creating machine...
	I0115 10:58:11.425801   52411 main.go:141] libmachine: (auto-453827) Calling .Create
	I0115 10:58:11.425929   52411 main.go:141] libmachine: (auto-453827) Creating KVM machine...
	I0115 10:58:11.427054   52411 main.go:141] libmachine: (auto-453827) DBG | found existing default KVM network
	I0115 10:58:11.428212   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.428064   52435 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:06:12} reservation:<nil>}
	I0115 10:58:11.429273   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.429174   52435 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280870}
	I0115 10:58:11.434502   52411 main.go:141] libmachine: (auto-453827) DBG | trying to create private KVM network mk-auto-453827 192.168.50.0/24...
	I0115 10:58:11.511592   52411 main.go:141] libmachine: (auto-453827) DBG | private KVM network mk-auto-453827 192.168.50.0/24 created
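Before creating the VM, the driver scans the private 192.168.x.0/24 ranges and takes the first one no existing libvirt bridge already owns; here 192.168.39.0/24 is taken by virbr1, so 192.168.50.0/24 is used. A quick way to see the occupied subnets from the host:

    # List every libvirt network and the bridge address it claims.
    for net in $(virsh --connect qemu:///system net-list --all --name); do
      printf '%s: ' "$net"
      virsh --connect qemu:///system net-dumpxml "$net" | grep -o "ip address='[^']*'"
    done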
	I0115 10:58:11.511691   52411 main.go:141] libmachine: (auto-453827) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827 ...
	I0115 10:58:11.511715   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.511505   52435 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:58:11.511739   52411 main.go:141] libmachine: (auto-453827) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 10:58:11.511789   52411 main.go:141] libmachine: (auto-453827) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 10:58:11.714133   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.714003   52435 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827/id_rsa...
	I0115 10:58:11.858400   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.858270   52435 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827/auto-453827.rawdisk...
	I0115 10:58:11.858465   52411 main.go:141] libmachine: (auto-453827) DBG | Writing magic tar header
	I0115 10:58:11.858483   52411 main.go:141] libmachine: (auto-453827) DBG | Writing SSH key tar header
	I0115 10:58:11.858496   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:11.858441   52435 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827 ...
	I0115 10:58:11.858609   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827
	I0115 10:58:11.858634   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 10:58:11.858655   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827 (perms=drwx------)
	I0115 10:58:11.858672   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 10:58:11.858683   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 10:58:11.858701   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 10:58:11.858714   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 10:58:11.858734   52411 main.go:141] libmachine: (auto-453827) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 10:58:11.858750   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:58:11.858767   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 10:58:11.858782   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 10:58:11.858795   52411 main.go:141] libmachine: (auto-453827) Creating domain...
	I0115 10:58:11.858809   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home/jenkins
	I0115 10:58:11.858826   52411 main.go:141] libmachine: (auto-453827) DBG | Checking permissions on dir: /home
	I0115 10:58:11.858837   52411 main.go:141] libmachine: (auto-453827) DBG | Skipping /home - not owner
	I0115 10:58:11.860008   52411 main.go:141] libmachine: (auto-453827) define libvirt domain using xml: 
	I0115 10:58:11.860025   52411 main.go:141] libmachine: (auto-453827) <domain type='kvm'>
	I0115 10:58:11.860032   52411 main.go:141] libmachine: (auto-453827)   <name>auto-453827</name>
	I0115 10:58:11.860041   52411 main.go:141] libmachine: (auto-453827)   <memory unit='MiB'>3072</memory>
	I0115 10:58:11.860049   52411 main.go:141] libmachine: (auto-453827)   <vcpu>2</vcpu>
	I0115 10:58:11.860062   52411 main.go:141] libmachine: (auto-453827)   <features>
	I0115 10:58:11.860076   52411 main.go:141] libmachine: (auto-453827)     <acpi/>
	I0115 10:58:11.860093   52411 main.go:141] libmachine: (auto-453827)     <apic/>
	I0115 10:58:11.860121   52411 main.go:141] libmachine: (auto-453827)     <pae/>
	I0115 10:58:11.860146   52411 main.go:141] libmachine: (auto-453827)     
	I0115 10:58:11.860170   52411 main.go:141] libmachine: (auto-453827)   </features>
	I0115 10:58:11.860179   52411 main.go:141] libmachine: (auto-453827)   <cpu mode='host-passthrough'>
	I0115 10:58:11.860189   52411 main.go:141] libmachine: (auto-453827)   
	I0115 10:58:11.860196   52411 main.go:141] libmachine: (auto-453827)   </cpu>
	I0115 10:58:11.860214   52411 main.go:141] libmachine: (auto-453827)   <os>
	I0115 10:58:11.860233   52411 main.go:141] libmachine: (auto-453827)     <type>hvm</type>
	I0115 10:58:11.860247   52411 main.go:141] libmachine: (auto-453827)     <boot dev='cdrom'/>
	I0115 10:58:11.860257   52411 main.go:141] libmachine: (auto-453827)     <boot dev='hd'/>
	I0115 10:58:11.860271   52411 main.go:141] libmachine: (auto-453827)     <bootmenu enable='no'/>
	I0115 10:58:11.860279   52411 main.go:141] libmachine: (auto-453827)   </os>
	I0115 10:58:11.860293   52411 main.go:141] libmachine: (auto-453827)   <devices>
	I0115 10:58:11.860304   52411 main.go:141] libmachine: (auto-453827)     <disk type='file' device='cdrom'>
	I0115 10:58:11.860318   52411 main.go:141] libmachine: (auto-453827)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827/boot2docker.iso'/>
	I0115 10:58:11.860330   52411 main.go:141] libmachine: (auto-453827)       <target dev='hdc' bus='scsi'/>
	I0115 10:58:11.860340   52411 main.go:141] libmachine: (auto-453827)       <readonly/>
	I0115 10:58:11.860355   52411 main.go:141] libmachine: (auto-453827)     </disk>
	I0115 10:58:11.860371   52411 main.go:141] libmachine: (auto-453827)     <disk type='file' device='disk'>
	I0115 10:58:11.860385   52411 main.go:141] libmachine: (auto-453827)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 10:58:11.860402   52411 main.go:141] libmachine: (auto-453827)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/auto-453827/auto-453827.rawdisk'/>
	I0115 10:58:11.860411   52411 main.go:141] libmachine: (auto-453827)       <target dev='hda' bus='virtio'/>
	I0115 10:58:11.860424   52411 main.go:141] libmachine: (auto-453827)     </disk>
	I0115 10:58:11.860440   52411 main.go:141] libmachine: (auto-453827)     <interface type='network'>
	I0115 10:58:11.860454   52411 main.go:141] libmachine: (auto-453827)       <source network='mk-auto-453827'/>
	I0115 10:58:11.860467   52411 main.go:141] libmachine: (auto-453827)       <model type='virtio'/>
	I0115 10:58:11.860476   52411 main.go:141] libmachine: (auto-453827)     </interface>
	I0115 10:58:11.860488   52411 main.go:141] libmachine: (auto-453827)     <interface type='network'>
	I0115 10:58:11.860499   52411 main.go:141] libmachine: (auto-453827)       <source network='default'/>
	I0115 10:58:11.860514   52411 main.go:141] libmachine: (auto-453827)       <model type='virtio'/>
	I0115 10:58:11.860528   52411 main.go:141] libmachine: (auto-453827)     </interface>
	I0115 10:58:11.860538   52411 main.go:141] libmachine: (auto-453827)     <serial type='pty'>
	I0115 10:58:11.860551   52411 main.go:141] libmachine: (auto-453827)       <target port='0'/>
	I0115 10:58:11.860561   52411 main.go:141] libmachine: (auto-453827)     </serial>
	I0115 10:58:11.860575   52411 main.go:141] libmachine: (auto-453827)     <console type='pty'>
	I0115 10:58:11.860595   52411 main.go:141] libmachine: (auto-453827)       <target type='serial' port='0'/>
	I0115 10:58:11.860608   52411 main.go:141] libmachine: (auto-453827)     </console>
	I0115 10:58:11.860620   52411 main.go:141] libmachine: (auto-453827)     <rng model='virtio'>
	I0115 10:58:11.860634   52411 main.go:141] libmachine: (auto-453827)       <backend model='random'>/dev/random</backend>
	I0115 10:58:11.860646   52411 main.go:141] libmachine: (auto-453827)     </rng>
	I0115 10:58:11.860655   52411 main.go:141] libmachine: (auto-453827)     
	I0115 10:58:11.860669   52411 main.go:141] libmachine: (auto-453827)     
	I0115 10:58:11.860688   52411 main.go:141] libmachine: (auto-453827)   </devices>
	I0115 10:58:11.860699   52411 main.go:141] libmachine: (auto-453827) </domain>
	I0115 10:58:11.860710   52411 main.go:141] libmachine: (auto-453827) 
	I0115 10:58:11.865077   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:31:ef:c7 in network default
	I0115 10:58:11.865699   52411 main.go:141] libmachine: (auto-453827) Ensuring networks are active...
	I0115 10:58:11.865725   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:11.866381   52411 main.go:141] libmachine: (auto-453827) Ensuring network default is active
	I0115 10:58:11.866728   52411 main.go:141] libmachine: (auto-453827) Ensuring network mk-auto-453827 is active
	I0115 10:58:11.867171   52411 main.go:141] libmachine: (auto-453827) Getting domain xml...
	I0115 10:58:11.867825   52411 main.go:141] libmachine: (auto-453827) Creating domain...
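In libvirt terms the next steps are defining the domain from the XML above, booting it, and polling the private network's DHCP leases until the guest reports an address. Roughly, with the XML saved to auto-453827.xml (a hypothetical filename; the driver passes the XML over the libvirt API rather than via a file):

    virsh --connect qemu:///system define auto-453827.xml
    virsh --connect qemu:///system start auto-453827
    # The MAC below is the one the log reports for network mk-auto-453827.
    until virsh --connect qemu:///system net-dhcp-leases mk-auto-453827 \
        | grep -q '52:54:00:e1:01:26'; do
      sleep 2
    done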
	I0115 10:58:13.177283   52411 main.go:141] libmachine: (auto-453827) Waiting to get IP...
	I0115 10:58:13.178108   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:13.178687   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:13.178736   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:13.178665   52435 retry.go:31] will retry after 213.480972ms: waiting for machine to come up
	I0115 10:58:13.394444   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:13.395080   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:13.395115   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:13.394997   52435 retry.go:31] will retry after 281.58835ms: waiting for machine to come up
	I0115 10:58:13.678756   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:13.679332   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:13.679363   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:13.679276   52435 retry.go:31] will retry after 333.828992ms: waiting for machine to come up
	I0115 10:58:14.015078   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:14.015510   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:14.015542   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:14.015481   52435 retry.go:31] will retry after 428.254124ms: waiting for machine to come up
	I0115 10:58:14.445112   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:14.445580   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:14.445599   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:14.445516   52435 retry.go:31] will retry after 478.629302ms: waiting for machine to come up
	I0115 10:58:14.926517   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:14.927108   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:14.927163   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:14.927050   52435 retry.go:31] will retry after 891.96413ms: waiting for machine to come up
	I0115 10:58:15.820668   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:15.821204   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:15.821236   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:15.821150   52435 retry.go:31] will retry after 880.882839ms: waiting for machine to come up
	I0115 10:58:17.565105   52070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.934047796s)
	I0115 10:58:17.565133   52070 crio.go:451] Took 2.934149 seconds to extract the tarball
	I0115 10:58:17.565145   52070 ssh_runner.go:146] rm: /preloaded.tar.lz4
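The preload handling above is: check whether /preloaded.tar.lz4 already exists on the guest, stream the cached tarball over if not, unpack it into /var with extended attributes preserved, then delete it. A condensed sketch over plain ssh (user, IP, and paths as in the log; minikube drives this through its own ssh runner rather than the ssh client):

    TARBALL=/home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
    GUEST=docker@192.168.61.238
    # Transfer only if the guest does not already have the tarball.
    ssh "$GUEST" 'stat /preloaded.tar.lz4' >/dev/null 2>&1 \
      || cat "$TARBALL" | ssh "$GUEST" 'sudo tee /preloaded.tar.lz4 >/dev/null'
    # Extract image layers and metadata into /var, then clean up.
    ssh "$GUEST" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'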
	I0115 10:58:17.602234   52070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:58:17.653066   52070 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:58:17.653087   52070 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:58:17.653145   52070 ssh_runner.go:195] Run: crio config
	I0115 10:58:17.719586   52070 cni.go:84] Creating CNI manager for ""
	I0115 10:58:17.719614   52070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:58:17.719637   52070 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0115 10:58:17.719669   52070 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-273069 NodeName:newest-cni-273069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:58:17.719841   52070 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-273069"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:58:17.719934   52070 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-273069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-273069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
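Once this config is written out to /var/tmp/minikube/kubeadm.yaml.new (the scp step a few lines below), it can be sanity-checked in isolation; one way, assuming kubeadm's built-in validator is available in this build:

    # Parse and validate the generated InitConfiguration/ClusterConfiguration
    # without touching the node.
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new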
	I0115 10:58:17.720000   52070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:58:17.730216   52070 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:58:17.730297   52070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:58:17.739207   52070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0115 10:58:17.756409   52070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:58:17.772624   52070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0115 10:58:17.789437   52070 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0115 10:58:17.793667   52070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:58:17.807057   52070 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069 for IP: 192.168.61.238
	I0115 10:58:17.807081   52070 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:58:17.807242   52070 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:58:17.807295   52070 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:58:17.807386   52070 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/client.key
	I0115 10:58:17.807476   52070 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/apiserver.key.761b3e7f
	I0115 10:58:17.807539   52070 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/proxy-client.key
	I0115 10:58:17.807647   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:58:17.807677   52070 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:58:17.807687   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:58:17.807718   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:58:17.807755   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:58:17.807782   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:58:17.807840   52070 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:58:17.808650   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:58:17.832855   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:58:17.857298   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:58:17.880776   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:58:17.905610   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:58:17.930502   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:58:17.956265   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:58:17.984142   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:58:18.009287   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:58:18.032369   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:58:18.055681   52070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:58:18.078873   52070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:58:18.095445   52070 ssh_runner.go:195] Run: openssl version
	I0115 10:58:18.101126   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:58:18.112409   52070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:58:18.117359   52070 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:58:18.117434   52070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:58:18.124716   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:58:18.138368   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:58:18.149692   52070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:58:18.154451   52070 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:58:18.154498   52070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:58:18.160026   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:58:18.170936   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:58:18.181815   52070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:58:18.186864   52070 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:58:18.186916   52070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:58:18.192825   52070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
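The sequence above is the standard OpenSSL hashed-directory layout: each CA under /usr/share/ca-certificates gets a <subject-hash>.0 symlink in /etc/ssl/certs so verification can find it by hash (b5213941 is minikubeCA's hash in this run). For one file:

    # Compute the subject hash and create the symlink OpenSSL resolves at verify time.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"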
	I0115 10:58:18.204401   52070 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:58:18.209407   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:58:18.215523   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:58:18.221747   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:58:18.227928   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:58:18.234511   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:58:18.241055   52070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:58:18.247544   52070 kubeadm.go:404] StartCluster: {Name:newest-cni-273069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-273069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false syste
m_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:58:18.247629   52070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:58:18.247678   52070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:58:18.290781   52070 cri.go:89] found id: ""
	I0115 10:58:18.290853   52070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:58:18.303223   52070 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:58:18.303248   52070 kubeadm.go:636] restartCluster start
	I0115 10:58:18.303302   52070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:58:18.315011   52070 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:18.315855   52070 kubeconfig.go:135] verify returned: extract IP: "newest-cni-273069" does not appear in /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:58:18.316540   52070 kubeconfig.go:146] "newest-cni-273069" context is missing from /home/jenkins/minikube-integration/17953-4821/kubeconfig - will repair!
	I0115 10:58:18.317408   52070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:58:18.433214   52070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:58:18.444718   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:18.444768   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:18.458690   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:18.945218   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:18.945294   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:18.960668   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:19.445219   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:19.445286   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:19.458539   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:19.944936   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:19.945031   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:19.959587   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:16.703649   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:16.704059   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:16.704103   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:16.704024   52435 retry.go:31] will retry after 1.338908377s: waiting for machine to come up
	I0115 10:58:18.044144   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:18.044669   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:18.044697   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:18.044624   52435 retry.go:31] will retry after 1.602835361s: waiting for machine to come up
	I0115 10:58:19.649517   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:19.650048   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:19.650080   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:19.649995   52435 retry.go:31] will retry after 1.500964268s: waiting for machine to come up
	I0115 10:58:20.445088   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:20.445165   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:20.457496   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:20.944812   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:20.944901   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:20.961801   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:21.445443   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:21.445544   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:21.459068   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:21.945796   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:21.945896   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:21.958570   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:22.445152   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:22.445253   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:22.457988   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:22.945591   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:22.945671   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:22.963054   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:23.445607   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:23.445696   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:23.462817   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:23.945468   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:23.945568   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:23.961232   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:24.444750   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:24.444856   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:24.459010   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:24.945381   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:24.945459   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:24.960698   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:21.152655   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:21.153066   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:21.153101   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:21.153019   52435 retry.go:31] will retry after 2.655369884s: waiting for machine to come up
	I0115 10:58:23.810248   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:23.810794   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:23.810826   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:23.810736   52435 retry.go:31] will retry after 3.228354757s: waiting for machine to come up
	I0115 10:58:25.444782   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:25.444896   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:25.459092   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:25.945706   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:25.945789   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:25.962190   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:26.444752   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:26.444832   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:26.457756   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:26.945098   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:26.945192   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:26.958440   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:27.444821   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:27.444895   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:27.457448   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:27.944982   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:27.945075   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:27.957908   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:28.445745   52070 api_server.go:166] Checking apiserver status ...
	I0115 10:58:28.445817   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:58:28.461982   52070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:58:28.462013   52070 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:58:28.462026   52070 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:58:28.462038   52070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:58:28.462088   52070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:58:28.519541   52070 cri.go:89] found id: ""
	I0115 10:58:28.519618   52070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:58:28.538064   52070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:58:28.548597   52070 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:58:28.548678   52070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:58:28.558172   52070 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:58:28.558194   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:58:28.680645   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:58:29.530855   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:58:29.739940   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:58:29.864705   52070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:58:29.957133   52070 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:58:29.957212   52070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:58:27.040343   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:27.040822   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:27.040844   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:27.040786   52435 retry.go:31] will retry after 3.197695765s: waiting for machine to come up
	I0115 10:58:30.241629   52411 main.go:141] libmachine: (auto-453827) DBG | domain auto-453827 has defined MAC address 52:54:00:e1:01:26 in network mk-auto-453827
	I0115 10:58:30.242031   52411 main.go:141] libmachine: (auto-453827) DBG | unable to find current IP address of domain auto-453827 in network mk-auto-453827
	I0115 10:58:30.242085   52411 main.go:141] libmachine: (auto-453827) DBG | I0115 10:58:30.241979   52435 retry.go:31] will retry after 3.510505503s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:37:59 UTC, ends at Mon 2024-01-15 10:58:33 UTC. --
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.586954042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316313586944243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b6c5f8c7-948a-41f9-87a5-a190f33ffd26 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.587611567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8674abfe-9ab1-4406-9f36-9c79144e0428 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.587659874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8674abfe-9ab1-4406-9f36-9c79144e0428 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.587889244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8674abfe-9ab1-4406-9f36-9c79144e0428 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.628548281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bd7d4ee3-0f96-44bb-b597-49bd1d7f15af name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.628628158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bd7d4ee3-0f96-44bb-b597-49bd1d7f15af name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.630459176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2add8997-9cbb-48be-aa51-d98575c138cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.630872649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316313630859237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2add8997-9cbb-48be-aa51-d98575c138cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.631916636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85d2fb4c-a7a0-4e4b-a4df-ca77bcb0dd91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.631968747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85d2fb4c-a7a0-4e4b-a4df-ca77bcb0dd91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.632252023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85d2fb4c-a7a0-4e4b-a4df-ca77bcb0dd91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.673134789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=80e514cd-1a1a-4125-9b82-8e01a97ad5e3 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.673215837Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=80e514cd-1a1a-4125-9b82-8e01a97ad5e3 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.676829578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2e926e64-1c5d-40f6-9038-92ed91b0b6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.677806786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316313677787235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2e926e64-1c5d-40f6-9038-92ed91b0b6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.679514460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fab187ad-da13-45df-ba60-0b3a5fb77f7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.679622331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fab187ad-da13-45df-ba60-0b3a5fb77f7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.679843139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fab187ad-da13-45df-ba60-0b3a5fb77f7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.718863772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=366f9c0c-8c12-4f0e-aa0c-3cbcffcbcf33 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.718922205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=366f9c0c-8c12-4f0e-aa0c-3cbcffcbcf33 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.720337977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fe1a701c-ef81-4f11-9e8e-91485ac3042f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.720719293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316313720706530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fe1a701c-ef81-4f11-9e8e-91485ac3042f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.721366003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f6a837c1-f879-4304-885d-25018acbe9bd name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.721415935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f6a837c1-f879-4304-885d-25018acbe9bd name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:33 embed-certs-781270 crio[727]: time="2024-01-15 10:58:33.721597886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315151754453381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d451182513357c1e4bfbc80d5edfadf8f0ccc7ec2887fba2a9baa58db9764409,PodSandboxId:f468dc0274416a9c9c141d05c0ad72abc912319290a78d1ce8b1fd2cc861c4ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315135630539318,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 453842a7-e912-4899-86dc-3ed65feee9c7,},Annotations:map[string]string{io.kubernetes.container.hash: c0713515,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2,PodSandboxId:12dc086e474cc7bdd264ff9f4e6ee8bc99035b389ca5fda26cdd09f503e4a9a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315134373353339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n59ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34777797-e585-42b7-852f-87d8bf442f6f,},Annotations:map[string]string{io.kubernetes.container.hash: a6e06c22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c,PodSandboxId:39234a6ce36220a01eaf5da3a3762297390655cf9b541c3e38bedda33a91ee0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315121023115053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: f13c7475-31d6-4aec-9905-070fafc63afa,},Annotations:map[string]string{io.kubernetes.container.hash: b78156cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f,PodSandboxId:ffc8ca836544dd400f2e9c808bd3400d1e8ea3e1015ef9aeb3f576e821598c9a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315120922206189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqgfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0df28b2-1c
e0-40c7-b9aa-d56862f39034,},Annotations:map[string]string{io.kubernetes.container.hash: b71c5d12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b,PodSandboxId:8d6fe96efdec776643f19f0c468e12f3d46c3efa3e876b64746770db744460c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315111037211441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9cdca2e0cfac5bd
845b568e4f9f745,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5,PodSandboxId:01d7ef7398d831b8e22ef1914aefac91e95636c3a7ae965802725189bfe5b8d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315110859437640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f255ced3c8832b5eaf0bd0066f2df6,},Annotations:map[string]string{io
.kubernetes.container.hash: 48d16d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d,PodSandboxId:2d78f10957e24ace27ba093446569949023f780ce4e009bcf21ab7d025b6988c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315110727849375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39ee0e9b9e2b796514e8d1d0e7ee69,},Annotations:map[string]string{io.kubernete
s.container.hash: ce765492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc,PodSandboxId:76493771191cf8992f0b97c6651dbe178258ea1ce966791003e445452063c855,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315110683362842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-781270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19d87abe6210b88acc403e1bfc13d69c,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f6a837c1-f879-4304-885d-25018acbe9bd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	111601a6dd351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   39234a6ce3622       storage-provisioner
	d451182513357       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   f468dc0274416       busybox
	36c0765390486       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   12dc086e474cc       coredns-5dd5756b68-n59ft
	6abb26467c971       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   39234a6ce3622       storage-provisioner
	6f792de826409       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      19 minutes ago      Running             kube-proxy                1                   ffc8ca836544d       kube-proxy-jqgfc
	fd8643f05eca8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   8d6fe96efdec7       kube-scheduler-embed-certs-781270
	30a66dab34a57       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   01d7ef7398d83       etcd-embed-certs-781270
	4dcae24d7ff7b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   2d78f10957e24       kube-apiserver-embed-certs-781270
	4095240514ca1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   76493771191cf       kube-controller-manager-embed-certs-781270
	
	
	==> coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51118 - 8823 "HINFO IN 3301450306179273962.8606541448989940442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037830463s
	
	
	==> describe nodes <==
	Name:               embed-certs-781270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-781270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=embed-certs-781270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_29_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-781270
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:58:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:54:25 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:54:25 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:54:25 +0000   Mon, 15 Jan 2024 10:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:54:25 +0000   Mon, 15 Jan 2024 10:38:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.222
	  Hostname:    embed-certs-781270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f39339401eb64c2ab4869bf492441844
	  System UUID:                f3933940-1eb6-4c2a-b486-9bf492441844
	  Boot ID:                    4f91d199-0378-4e0d-9609-e343b27e2bad
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-n59ft                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-781270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-781270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-781270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-jqgfc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-781270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-wxclh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node embed-certs-781270 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-781270 event: Registered Node embed-certs-781270 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-781270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-781270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-781270 event: Registered Node embed-certs-781270 in Controller
	
	
	==> dmesg <==
	[Jan15 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071460] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.577695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.504945] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149130] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan15 10:38] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.481016] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.105353] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.158668] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.105586] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.237391] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.942102] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +22.122069] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] <==
	{"level":"warn","ts":"2024-01-15T10:38:39.841447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.893328Z","time spent":"948.109618ms","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-15T10:38:39.841692Z","caller":"traceutil/trace.go:171","msg":"trace[1441359675] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"278.864064ms","start":"2024-01-15T10:38:39.562819Z","end":"2024-01-15T10:38:39.841683Z","steps":["trace[1441359675] 'process raft request'  (duration: 278.347753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.841893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"914.361004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:39.84192Z","caller":"traceutil/trace.go:171","msg":"trace[1144079903] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"914.39148ms","start":"2024-01-15T10:38:38.92752Z","end":"2024-01-15T10:38:39.841912Z","steps":["trace[1144079903] 'agreement among raft nodes before linearized reading'  (duration: 914.339783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:39.84194Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:38:38.927506Z","time spent":"914.429656ms","remote":"127.0.0.1:50874","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-15T10:38:40.033941Z","caller":"traceutil/trace.go:171","msg":"trace[19813570] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"181.015704ms","start":"2024-01-15T10:38:39.852904Z","end":"2024-01-15T10:38:40.03392Z","steps":["trace[19813570] 'process raft request'  (duration: 178.341384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:40.034593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.965619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" ","response":"range_response_count:1 size:2025"}
	{"level":"info","ts":"2024-01-15T10:38:40.034682Z","caller":"traceutil/trace.go:171","msg":"trace[993406204] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:572; }","duration":"180.066826ms","start":"2024-01-15T10:38:39.854604Z","end":"2024-01-15T10:38:40.034671Z","steps":["trace[993406204] 'agreement among raft nodes before linearized reading'  (duration: 179.864193ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:38:40.034399Z","caller":"traceutil/trace.go:171","msg":"trace[1360197727] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:600; }","duration":"179.220921ms","start":"2024-01-15T10:38:39.854622Z","end":"2024-01-15T10:38:40.033843Z","steps":["trace[1360197727] 'read index received'  (duration: 176.526673ms)","trace[1360197727] 'applied index is now lower than readState.Index'  (duration: 2.692911ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:38:40.039445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.98729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:40.039795Z","caller":"traceutil/trace.go:171","msg":"trace[690659837] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:573; }","duration":"143.346301ms","start":"2024-01-15T10:38:39.896436Z","end":"2024-01-15T10:38:40.039783Z","steps":["trace[690659837] 'agreement among raft nodes before linearized reading'  (duration: 142.926917ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:38:40.039539Z","caller":"traceutil/trace.go:171","msg":"trace[2017555971] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"179.836449ms","start":"2024-01-15T10:38:39.85969Z","end":"2024-01-15T10:38:40.039526Z","steps":["trace[2017555971] 'process raft request'  (duration: 179.592239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:38:40.039712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.954294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:38:40.048836Z","caller":"traceutil/trace.go:171","msg":"trace[1403278608] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:573; }","duration":"121.077066ms","start":"2024-01-15T10:38:39.927747Z","end":"2024-01-15T10:38:40.048824Z","steps":["trace[1403278608] 'agreement among raft nodes before linearized reading'  (duration: 111.940116ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:48:34.839423Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":912}
	{"level":"info","ts":"2024-01-15T10:48:34.843566Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":912,"took":"3.608882ms","hash":3628706100}
	{"level":"info","ts":"2024-01-15T10:48:34.843646Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3628706100,"revision":912,"compact-revision":-1}
	{"level":"info","ts":"2024-01-15T10:53:34.847654Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1154}
	{"level":"info","ts":"2024-01-15T10:53:34.849802Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1154,"took":"1.345184ms","hash":148062562}
	{"level":"info","ts":"2024-01-15T10:53:34.849893Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":148062562,"revision":1154,"compact-revision":912}
	{"level":"warn","ts":"2024-01-15T10:57:10.921315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.212403ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10619924482008741496 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.222\" mod_revision:1565 > success:<request_put:<key:\"/registry/masterleases/192.168.72.222\" value_size:67 lease:1396552445153965686 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.222\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-15T10:57:10.921535Z","caller":"traceutil/trace.go:171","msg":"trace[1008444013] transaction","detail":"{read_only:false; response_revision:1573; number_of_response:1; }","duration":"293.333183ms","start":"2024-01-15T10:57:10.628168Z","end":"2024-01-15T10:57:10.921502Z","steps":["trace[1008444013] 'process raft request'  (duration: 128.537272ms)","trace[1008444013] 'compare'  (duration: 164.066569ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:57:11.330096Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.754242ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-15T10:57:11.330301Z","caller":"traceutil/trace.go:171","msg":"trace[1281339118] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1573; }","duration":"249.929885ms","start":"2024-01-15T10:57:11.08031Z","end":"2024-01-15T10:57:11.33024Z","steps":["trace[1281339118] 'count revisions from in-memory index tree'  (duration: 249.672423ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:57:11.523087Z","caller":"traceutil/trace.go:171","msg":"trace[1972129193] transaction","detail":"{read_only:false; response_revision:1574; number_of_response:1; }","duration":"113.39899ms","start":"2024-01-15T10:57:11.409562Z","end":"2024-01-15T10:57:11.522961Z","steps":["trace[1972129193] 'process raft request'  (duration: 113.264299ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:58:34 up 20 min,  0 users,  load average: 0.10, 0.21, 0.18
	Linux embed-certs-781270 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] <==
	W0115 10:53:37.664133       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:53:37.664255       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:53:37.664326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:53:37.664200       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:53:37.664465       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:53:37.665770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:54:36.513748       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:54:37.665367       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:54:37.665430       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:54:37.665438       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:54:37.666654       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:54:37.666722       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:54:37.666730       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:55:36.513727       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0115 10:56:36.512886       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:56:37.666450       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:56:37.666651       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:56:37.666695       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:56:37.666893       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:56:37.667215       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:56:37.668830       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:57:36.513844       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] <==
	I0115 10:52:51.924328       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:53:21.473470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:53:21.932724       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:53:51.479590       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:53:51.941209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:54:21.485965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:54:21.949943       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:54:51.492673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:54:51.958824       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:55:06.527415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="248.742µs"
	E0115 10:55:21.499246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:55:21.530896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="194.179µs"
	I0115 10:55:21.968371       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:55:51.505691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:55:51.977105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:56:21.515739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:21.986256       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:56:51.522579       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:51.994933       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:21.529884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:22.005877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:51.537392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:52.018263       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:58:21.542961       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:58:22.028720       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] <==
	I0115 10:38:41.215364       1 server_others.go:69] "Using iptables proxy"
	I0115 10:38:41.231124       1 node.go:141] Successfully retrieved node IP: 192.168.72.222
	I0115 10:38:41.284972       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 10:38:41.285113       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:38:41.287793       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:38:41.287867       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:38:41.288117       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:38:41.288297       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:41.288967       1 config.go:188] "Starting service config controller"
	I0115 10:38:41.289112       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:38:41.289151       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:38:41.289167       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:38:41.291381       1 config.go:315] "Starting node config controller"
	I0115 10:38:41.291417       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:38:41.389733       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:38:41.389806       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:38:41.391888       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] <==
	I0115 10:38:33.832228       1 serving.go:348] Generated self-signed cert in-memory
	W0115 10:38:36.643592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:38:36.643731       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:38:36.643742       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:38:36.643748       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:38:36.678245       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0115 10:38:36.678321       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:36.679681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:38:36.679741       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:38:36.680475       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:38:36.680572       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:38:36.780756       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:37:59 UTC, ends at Mon 2024-01-15 10:58:34 UTC. --
	Jan 15 10:56:15 embed-certs-781270 kubelet[933]: E0115 10:56:15.511096     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:56:26 embed-certs-781270 kubelet[933]: E0115 10:56:26.510545     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:56:29 embed-certs-781270 kubelet[933]: E0115 10:56:29.530514     933 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:56:29 embed-certs-781270 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:56:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:56:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:56:37 embed-certs-781270 kubelet[933]: E0115 10:56:37.510891     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:56:51 embed-certs-781270 kubelet[933]: E0115 10:56:51.511081     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:57:02 embed-certs-781270 kubelet[933]: E0115 10:57:02.510635     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:57:13 embed-certs-781270 kubelet[933]: E0115 10:57:13.511216     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:57:28 embed-certs-781270 kubelet[933]: E0115 10:57:28.510782     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:57:29 embed-certs-781270 kubelet[933]: E0115 10:57:29.530354     933 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:57:29 embed-certs-781270 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:57:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:57:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:57:41 embed-certs-781270 kubelet[933]: E0115 10:57:41.510740     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:57:54 embed-certs-781270 kubelet[933]: E0115 10:57:54.510790     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:58:08 embed-certs-781270 kubelet[933]: E0115 10:58:08.511213     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:58:21 embed-certs-781270 kubelet[933]: E0115 10:58:21.512396     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	Jan 15 10:58:29 embed-certs-781270 kubelet[933]: E0115 10:58:29.495801     933 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 15 10:58:29 embed-certs-781270 kubelet[933]: E0115 10:58:29.533793     933 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:58:29 embed-certs-781270 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:58:29 embed-certs-781270 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:58:29 embed-certs-781270 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:58:33 embed-certs-781270 kubelet[933]: E0115 10:58:33.512612     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wxclh" podUID="2a52a963-a5dd-4ead-8da3-0d502c2c96ed"
	
	
	==> storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] <==
	I0115 10:39:11.876332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:39:11.895209       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:39:11.896434       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:39:29.309564       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:39:29.310223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28!
	I0115 10:39:29.311432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"803c8693-7968-4a63-9365-703529c42c62", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28 became leader
	I0115 10:39:29.411166       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-781270_520f755a-e5fb-4fbd-936e-e4cf1c80df28!
	
	
	==> storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] <==
	I0115 10:38:41.211669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:39:11.214605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-781270 -n embed-certs-781270
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-781270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wxclh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh: exit status 1 (68.498465ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wxclh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-781270 describe pod metrics-server-57f55c9bc5-wxclh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (382.52s)
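For anyone reproducing this post-mortem by hand, the non-running-pod query and the describe step that the helper runs above can be approximated with plain kubectl against the same profile context. This is a sketch under the assumption that minikube's metrics-server addon carries the upstream k8s-app=metrics-server label; the custom-columns output is added only for readability and is not part of the test helper:

	# List every pod that is not in the Running phase, across all namespaces,
	# in the embed-certs-781270 context (same query as helpers_test.go:261 above).
	kubectl --context embed-certs-781270 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase

	# Describe the non-running metrics-server pod by label rather than by the
	# generated pod name, which can go stale between helper invocations (hence
	# the NotFound error logged above).
	kubectl --context embed-certs-781270 -n kube-system \
	  describe pod -l k8s-app=metrics-server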

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (495.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-15 11:00:39.768839108 +0000 UTC m=+5651.740006516
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-709012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.515µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-709012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
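The failure above boils down to two checks: the harness waits up to 9m0s for pods labeled k8s-app=kubernetes-dashboard to appear, and it then inspects the dashboard-metrics-scraper deployment for an image containing registry.k8s.io/echoserver:1.4. A rough manual equivalent, using only standard kubectl flags and the profile context from the log (a sketch, not the harness's exact code):

	# Wait for the dashboard pods the test expects (same label selector and budget).
	kubectl --context default-k8s-diff-port-709012 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

	# Show the image the dashboard-metrics-scraper deployment actually references;
	# the test expects this string to contain registry.k8s.io/echoserver:1.4.
	kubectl --context default-k8s-diff-port-709012 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'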
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-709012 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-709012 logs -n 25: (1.303959963s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-453827 sudo cat                              | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo cat                              | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo                               | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo                               | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo                                  | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo cat                           | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo systemctl                        | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo systemctl                        | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo cat                           | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo cat                              | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo                               | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo cat                              | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo                               | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo containerd                       | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo                               | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo systemctl                        | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo find                          | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo systemctl                        | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p kindnet-453827 sudo crio                          | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| ssh     | -p auto-453827 sudo find                             | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| delete  | -p kindnet-453827                                    | kindnet-453827            | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	| ssh     | -p auto-453827 sudo crio                             | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-453827                                       | auto-453827               | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC | 15 Jan 24 11:00 UTC |
	| start   | -p custom-flannel-453827                             | custom-flannel-453827     | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p enable-default-cni-453827                         | enable-default-cni-453827 | jenkins | v1.32.0 | 15 Jan 24 11:00 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:00:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:00:24.810067   56863 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:00:24.810216   56863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:00:24.810229   56863 out.go:309] Setting ErrFile to fd 2...
	I0115 11:00:24.810236   56863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:00:24.810406   56863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 11:00:24.810992   56863 out.go:303] Setting JSON to false
	I0115 11:00:24.812086   56863 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6125,"bootTime":1705310300,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:00:24.812151   56863 start.go:138] virtualization: kvm guest
	I0115 11:00:24.814619   56863 out.go:177] * [enable-default-cni-453827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:00:24.816402   56863 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:00:24.816407   56863 notify.go:220] Checking for updates...
	I0115 11:00:24.817759   56863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:00:24.819021   56863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 11:00:24.820295   56863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 11:00:24.821623   56863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:00:24.823056   56863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:00:24.825188   56863 config.go:182] Loaded profile config "calico-453827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:00:24.825333   56863 config.go:182] Loaded profile config "custom-flannel-453827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:00:24.825475   56863 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:00:24.825607   56863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:00:24.861036   56863 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 11:00:24.862544   56863 start.go:298] selected driver: kvm2
	I0115 11:00:24.862556   56863 start.go:902] validating driver "kvm2" against <nil>
	I0115 11:00:24.862565   56863 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:00:24.863513   56863 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:00:24.863586   56863 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 11:00:24.880865   56863 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 11:00:24.880906   56863 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	E0115 11:00:24.881151   56863 start_flags.go:463] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0115 11:00:24.881175   56863 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 11:00:24.881225   56863 cni.go:84] Creating CNI manager for "bridge"
	I0115 11:00:24.881238   56863 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 11:00:24.881246   56863 start_flags.go:321] config:
	{Name:enable-default-cni-453827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-453827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:00:24.883718   56863 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:00:24.886212   56863 out.go:177] * Starting control plane node enable-default-cni-453827 in cluster enable-default-cni-453827
	I0115 11:00:21.962038   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:23.963635   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:24.544511   56769 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0115 11:00:24.544721   56769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 11:00:24.544771   56769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:00:24.558612   56769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0115 11:00:24.559114   56769 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:00:24.559784   56769 main.go:141] libmachine: Using API Version  1
	I0115 11:00:24.559810   56769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:00:24.560171   56769 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:00:24.560386   56769 main.go:141] libmachine: (custom-flannel-453827) Calling .GetMachineName
	I0115 11:00:24.560545   56769 main.go:141] libmachine: (custom-flannel-453827) Calling .DriverName
	I0115 11:00:24.560685   56769 start.go:159] libmachine.API.Create for "custom-flannel-453827" (driver="kvm2")
	I0115 11:00:24.560713   56769 client.go:168] LocalClient.Create starting
	I0115 11:00:24.560747   56769 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem
	I0115 11:00:24.560784   56769 main.go:141] libmachine: Decoding PEM data...
	I0115 11:00:24.560813   56769 main.go:141] libmachine: Parsing certificate...
	I0115 11:00:24.560888   56769 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem
	I0115 11:00:24.560914   56769 main.go:141] libmachine: Decoding PEM data...
	I0115 11:00:24.560933   56769 main.go:141] libmachine: Parsing certificate...
	I0115 11:00:24.560957   56769 main.go:141] libmachine: Running pre-create checks...
	I0115 11:00:24.560964   56769 main.go:141] libmachine: (custom-flannel-453827) Calling .PreCreateCheck
	I0115 11:00:24.561403   56769 main.go:141] libmachine: (custom-flannel-453827) Calling .GetConfigRaw
	I0115 11:00:24.561868   56769 main.go:141] libmachine: Creating machine...
	I0115 11:00:24.561886   56769 main.go:141] libmachine: (custom-flannel-453827) Calling .Create
	I0115 11:00:24.562036   56769 main.go:141] libmachine: (custom-flannel-453827) Creating KVM machine...
	I0115 11:00:24.563425   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | found existing default KVM network
	I0115 11:00:24.564621   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:24.564421   56820 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:06:12} reservation:<nil>}
	I0115 11:00:24.565654   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:24.565579   56820 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205130}
	I0115 11:00:24.571442   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | trying to create private KVM network mk-custom-flannel-453827 192.168.50.0/24...
	I0115 11:00:24.656303   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | private KVM network mk-custom-flannel-453827 192.168.50.0/24 created
	I0115 11:00:24.656352   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:24.656269   56820 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 11:00:24.656375   56769 main.go:141] libmachine: (custom-flannel-453827) Setting up store path in /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827 ...
	I0115 11:00:24.656401   56769 main.go:141] libmachine: (custom-flannel-453827) Building disk image from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 11:00:24.656431   56769 main.go:141] libmachine: (custom-flannel-453827) Downloading /home/jenkins/minikube-integration/17953-4821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 11:00:24.914337   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:24.914209   56820 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827/id_rsa...
	I0115 11:00:25.050078   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:25.049942   56820 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827/custom-flannel-453827.rawdisk...
	I0115 11:00:25.050112   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Writing magic tar header
	I0115 11:00:25.050130   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Writing SSH key tar header
	I0115 11:00:25.050193   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:25.050120   56820 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827 ...
	I0115 11:00:25.050272   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827
	I0115 11:00:25.050302   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube/machines
	I0115 11:00:25.050312   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 11:00:25.050329   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17953-4821
	I0115 11:00:25.050350   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827 (perms=drwx------)
	I0115 11:00:25.050366   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 11:00:25.050381   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube/machines (perms=drwxr-xr-x)
	I0115 11:00:25.050406   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821/.minikube (perms=drwxr-xr-x)
	I0115 11:00:25.050437   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home/jenkins
	I0115 11:00:25.050450   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins/minikube-integration/17953-4821 (perms=drwxrwxr-x)
	I0115 11:00:25.050466   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Checking permissions on dir: /home
	I0115 11:00:25.050476   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 11:00:25.050494   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | Skipping /home - not owner
	I0115 11:00:25.050509   56769 main.go:141] libmachine: (custom-flannel-453827) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 11:00:25.050519   56769 main.go:141] libmachine: (custom-flannel-453827) Creating domain...
	I0115 11:00:25.051902   56769 main.go:141] libmachine: (custom-flannel-453827) define libvirt domain using xml: 
	I0115 11:00:25.051932   56769 main.go:141] libmachine: (custom-flannel-453827) <domain type='kvm'>
	I0115 11:00:25.051948   56769 main.go:141] libmachine: (custom-flannel-453827)   <name>custom-flannel-453827</name>
	I0115 11:00:25.051957   56769 main.go:141] libmachine: (custom-flannel-453827)   <memory unit='MiB'>3072</memory>
	I0115 11:00:25.051964   56769 main.go:141] libmachine: (custom-flannel-453827)   <vcpu>2</vcpu>
	I0115 11:00:25.051969   56769 main.go:141] libmachine: (custom-flannel-453827)   <features>
	I0115 11:00:25.051975   56769 main.go:141] libmachine: (custom-flannel-453827)     <acpi/>
	I0115 11:00:25.051980   56769 main.go:141] libmachine: (custom-flannel-453827)     <apic/>
	I0115 11:00:25.051986   56769 main.go:141] libmachine: (custom-flannel-453827)     <pae/>
	I0115 11:00:25.051991   56769 main.go:141] libmachine: (custom-flannel-453827)     
	I0115 11:00:25.051997   56769 main.go:141] libmachine: (custom-flannel-453827)   </features>
	I0115 11:00:25.052002   56769 main.go:141] libmachine: (custom-flannel-453827)   <cpu mode='host-passthrough'>
	I0115 11:00:25.052007   56769 main.go:141] libmachine: (custom-flannel-453827)   
	I0115 11:00:25.052021   56769 main.go:141] libmachine: (custom-flannel-453827)   </cpu>
	I0115 11:00:25.052058   56769 main.go:141] libmachine: (custom-flannel-453827)   <os>
	I0115 11:00:25.052086   56769 main.go:141] libmachine: (custom-flannel-453827)     <type>hvm</type>
	I0115 11:00:25.052101   56769 main.go:141] libmachine: (custom-flannel-453827)     <boot dev='cdrom'/>
	I0115 11:00:25.052113   56769 main.go:141] libmachine: (custom-flannel-453827)     <boot dev='hd'/>
	I0115 11:00:25.052127   56769 main.go:141] libmachine: (custom-flannel-453827)     <bootmenu enable='no'/>
	I0115 11:00:25.052135   56769 main.go:141] libmachine: (custom-flannel-453827)   </os>
	I0115 11:00:25.052156   56769 main.go:141] libmachine: (custom-flannel-453827)   <devices>
	I0115 11:00:25.052165   56769 main.go:141] libmachine: (custom-flannel-453827)     <disk type='file' device='cdrom'>
	I0115 11:00:25.052195   56769 main.go:141] libmachine: (custom-flannel-453827)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827/boot2docker.iso'/>
	I0115 11:00:25.052219   56769 main.go:141] libmachine: (custom-flannel-453827)       <target dev='hdc' bus='scsi'/>
	I0115 11:00:25.052231   56769 main.go:141] libmachine: (custom-flannel-453827)       <readonly/>
	I0115 11:00:25.052243   56769 main.go:141] libmachine: (custom-flannel-453827)     </disk>
	I0115 11:00:25.052255   56769 main.go:141] libmachine: (custom-flannel-453827)     <disk type='file' device='disk'>
	I0115 11:00:25.052264   56769 main.go:141] libmachine: (custom-flannel-453827)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 11:00:25.052276   56769 main.go:141] libmachine: (custom-flannel-453827)       <source file='/home/jenkins/minikube-integration/17953-4821/.minikube/machines/custom-flannel-453827/custom-flannel-453827.rawdisk'/>
	I0115 11:00:25.052298   56769 main.go:141] libmachine: (custom-flannel-453827)       <target dev='hda' bus='virtio'/>
	I0115 11:00:25.052312   56769 main.go:141] libmachine: (custom-flannel-453827)     </disk>
	I0115 11:00:25.052324   56769 main.go:141] libmachine: (custom-flannel-453827)     <interface type='network'>
	I0115 11:00:25.052339   56769 main.go:141] libmachine: (custom-flannel-453827)       <source network='mk-custom-flannel-453827'/>
	I0115 11:00:25.052352   56769 main.go:141] libmachine: (custom-flannel-453827)       <model type='virtio'/>
	I0115 11:00:25.052365   56769 main.go:141] libmachine: (custom-flannel-453827)     </interface>
	I0115 11:00:25.052374   56769 main.go:141] libmachine: (custom-flannel-453827)     <interface type='network'>
	I0115 11:00:25.052381   56769 main.go:141] libmachine: (custom-flannel-453827)       <source network='default'/>
	I0115 11:00:25.052393   56769 main.go:141] libmachine: (custom-flannel-453827)       <model type='virtio'/>
	I0115 11:00:25.052405   56769 main.go:141] libmachine: (custom-flannel-453827)     </interface>
	I0115 11:00:25.052420   56769 main.go:141] libmachine: (custom-flannel-453827)     <serial type='pty'>
	I0115 11:00:25.052432   56769 main.go:141] libmachine: (custom-flannel-453827)       <target port='0'/>
	I0115 11:00:25.052448   56769 main.go:141] libmachine: (custom-flannel-453827)     </serial>
	I0115 11:00:25.052460   56769 main.go:141] libmachine: (custom-flannel-453827)     <console type='pty'>
	I0115 11:00:25.052473   56769 main.go:141] libmachine: (custom-flannel-453827)       <target type='serial' port='0'/>
	I0115 11:00:25.052486   56769 main.go:141] libmachine: (custom-flannel-453827)     </console>
	I0115 11:00:25.052495   56769 main.go:141] libmachine: (custom-flannel-453827)     <rng model='virtio'>
	I0115 11:00:25.052510   56769 main.go:141] libmachine: (custom-flannel-453827)       <backend model='random'>/dev/random</backend>
	I0115 11:00:25.052534   56769 main.go:141] libmachine: (custom-flannel-453827)     </rng>
	I0115 11:00:25.052547   56769 main.go:141] libmachine: (custom-flannel-453827)     
	I0115 11:00:25.052559   56769 main.go:141] libmachine: (custom-flannel-453827)     
	I0115 11:00:25.052571   56769 main.go:141] libmachine: (custom-flannel-453827)   </devices>
	I0115 11:00:25.052579   56769 main.go:141] libmachine: (custom-flannel-453827) </domain>
	I0115 11:00:25.052591   56769 main.go:141] libmachine: (custom-flannel-453827) 
	I0115 11:00:25.057459   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:39:af:c5 in network default
	I0115 11:00:25.058067   56769 main.go:141] libmachine: (custom-flannel-453827) Ensuring networks are active...
	I0115 11:00:25.058113   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:25.058807   56769 main.go:141] libmachine: (custom-flannel-453827) Ensuring network default is active
	I0115 11:00:25.059113   56769 main.go:141] libmachine: (custom-flannel-453827) Ensuring network mk-custom-flannel-453827 is active
	I0115 11:00:25.059662   56769 main.go:141] libmachine: (custom-flannel-453827) Getting domain xml...
	I0115 11:00:25.060440   56769 main.go:141] libmachine: (custom-flannel-453827) Creating domain...
	I0115 11:00:26.371647   56769 main.go:141] libmachine: (custom-flannel-453827) Waiting to get IP...
	I0115 11:00:26.372594   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:26.373012   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:26.373033   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:26.372992   56820 retry.go:31] will retry after 312.419908ms: waiting for machine to come up
	I0115 11:00:26.687606   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:26.688176   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:26.688197   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:26.688111   56820 retry.go:31] will retry after 267.034859ms: waiting for machine to come up
	I0115 11:00:26.956822   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:26.957395   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:26.957429   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:26.957377   56820 retry.go:31] will retry after 441.725403ms: waiting for machine to come up
	I0115 11:00:27.400562   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:27.401004   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:27.401036   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:27.400949   56820 retry.go:31] will retry after 497.466333ms: waiting for machine to come up
	I0115 11:00:27.899602   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:27.900175   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:27.900209   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:27.900133   56820 retry.go:31] will retry after 692.13023ms: waiting for machine to come up
	I0115 11:00:28.594035   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:28.594608   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:28.594646   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:28.594569   56820 retry.go:31] will retry after 949.864653ms: waiting for machine to come up
	I0115 11:00:24.887782   56863 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:00:24.887813   56863 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 11:00:24.887818   56863 cache.go:56] Caching tarball of preloaded images
	I0115 11:00:24.887923   56863 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 11:00:24.887937   56863 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 11:00:24.888030   56863 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/enable-default-cni-453827/config.json ...
	I0115 11:00:24.888048   56863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/enable-default-cni-453827/config.json: {Name:mkaf3dea06325225d20a47639d4159013c2eff56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:00:24.888206   56863 start.go:365] acquiring machines lock for enable-default-cni-453827: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 11:00:26.466962   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:28.473338   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:30.478113   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:29.545529   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:29.545961   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:29.546001   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:29.545929   56820 retry.go:31] will retry after 858.478587ms: waiting for machine to come up
	I0115 11:00:30.406354   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:30.406855   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:30.406881   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:30.406819   56820 retry.go:31] will retry after 1.349436974s: waiting for machine to come up
	I0115 11:00:31.758336   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:31.758963   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:31.758996   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:31.758903   56820 retry.go:31] will retry after 1.473274988s: waiting for machine to come up
	I0115 11:00:33.233391   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:33.233868   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:33.233897   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:33.233827   56820 retry.go:31] will retry after 2.037833092s: waiting for machine to come up
	I0115 11:00:32.963937   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:34.964345   53633 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-wqvqf" in "kube-system" namespace has status "Ready":"False"
	I0115 11:00:35.273012   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:35.273571   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:35.273599   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:35.273513   56820 retry.go:31] will retry after 2.609216307s: waiting for machine to come up
	I0115 11:00:37.884325   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | domain custom-flannel-453827 has defined MAC address 52:54:00:81:db:2a in network mk-custom-flannel-453827
	I0115 11:00:37.884810   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | unable to find current IP address of domain custom-flannel-453827 in network mk-custom-flannel-453827
	I0115 11:00:37.884832   56769 main.go:141] libmachine: (custom-flannel-453827) DBG | I0115 11:00:37.884782   56820 retry.go:31] will retry after 3.637520596s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:38:22 UTC, ends at Mon 2024-01-15 11:00:40 UTC. --
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.475188272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316440475175209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=dadcffc1-234e-4983-a588-04e9646f5621 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.476006503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a0fef9c5-c30d-47b6-b49b-b7fbdc09f7f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.476074522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a0fef9c5-c30d-47b6-b49b-b7fbdc09f7f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.476267978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a0fef9c5-c30d-47b6-b49b-b7fbdc09f7f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.520193663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5e80de44-25b9-417a-a131-bf53e276fc13 name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.520280538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5e80de44-25b9-417a-a131-bf53e276fc13 name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.522727983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=81eeb4f3-3f0a-4d2d-9fa8-51cdec100dbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.523165428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316440523149958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=81eeb4f3-3f0a-4d2d-9fa8-51cdec100dbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.524215027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=40dc6cbf-9f41-4c6a-9e86-99220a5ba63f name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.524288769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=40dc6cbf-9f41-4c6a-9e86-99220a5ba63f name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.524578757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=40dc6cbf-9f41-4c6a-9e86-99220a5ba63f name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.570623716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d1d678fc-4f86-47a7-b72f-20e9f25e3bdd name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.570678737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d1d678fc-4f86-47a7-b72f-20e9f25e3bdd name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.572708480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4827c791-bfa7-4aaa-afa4-5709eacbce83 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.573064406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316440573053836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4827c791-bfa7-4aaa-afa4-5709eacbce83 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.573620558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2960618f-41bc-4bea-a831-882d1b9bc771 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.573660751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2960618f-41bc-4bea-a831-882d1b9bc771 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.573846864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2960618f-41bc-4bea-a831-882d1b9bc771 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.611896997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b67be84f-745a-49f4-b702-607f355468ca name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.611953660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b67be84f-745a-49f4-b702-607f355468ca name=/runtime.v1.RuntimeService/Version
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.613197832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57d862a1-1c27-4207-8595-ef2f97d0f608 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.613700972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316440613685649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=57d862a1-1c27-4207-8595-ef2f97d0f608 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.614207066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e50dc074-4b6d-4a9c-982a-89de17594887 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.614250249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e50dc074-4b6d-4a9c-982a-89de17594887 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 11:00:40 default-k8s-diff-port-709012 crio[712]: time="2024-01-15 11:00:40.614425683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315169264901792,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb769c7c010d829f4a23377df93c90e8bf1c5599a00fa995b9e52c91ccd0a71,PodSandboxId:872614188e424c68f8544d6d3b4d129e26a127481dfaa6e658f7b710a782fa06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315146835872252,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a87a22c-0769-4d2b-9e34-04682f1975ea,},Annotations:map[string]string{io.kubernetes.container.hash: c471276c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a,PodSandboxId:f6de38c7f39c76235b94888d1d6774b6bcbdccf73d0ea139d4c7b2afba9c0f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705315145691263405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dzd2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d078727-4275-4308-9206-b471ce7aa586,},Annotations:map[string]string{io.kubernetes.container.hash: c46c6fea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f,PodSandboxId:ee835a4d0288441cf11f407222f006052c6d629bc11183a85cdc330cebadafd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705315138144703821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d8lcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9e68bc58-e11b-4534-9164-eb1b115b1721,},Annotations:map[string]string{io.kubernetes.container.hash: efdf6691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9,PodSandboxId:9411d2b23ff86e1df50dc8d7612ef71e58bc254452e8428e187916c9a20a465a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705315138066104909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
a0c2885-50ff-40e4-bd6d-624f33f45c9c,},Annotations:map[string]string{io.kubernetes.container.hash: c7258d47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022,PodSandboxId:66c48b48683c99d5068d56ed106df3e5f7f6e834aead734e1159392d47e68c67,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705315131889306823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9463f414b3141e35d9e5ee6b8849a92,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8,PodSandboxId:b53717eff7abcea451cd24470987c6568f3df4e69937de8feb9778733f2b5018,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705315131394714539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585f9295812ba39422526be195c682df,},An
notations:map[string]string{io.kubernetes.container.hash: 5a6c0eb7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045,PodSandboxId:92fecba08bfb9f159db945e1f104c4da980603343ccc4338778f92db9d3ba87c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705315131301555229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
57f9ebf45379653db2ca34fe521c184,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6,PodSandboxId:c8580dda7b40819e74bf6f95fae2d4961417c540cc83aae479676326a12da494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705315131282069604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-709012,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
af996f03f060971a07c47ab7207a249,},Annotations:map[string]string{io.kubernetes.container.hash: 93acb490,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e50dc074-4b6d-4a9c-982a-89de17594887 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff6b807e1af7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   9411d2b23ff86       storage-provisioner
	8fb769c7c010d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   872614188e424       busybox
	d7bf892409a21       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   f6de38c7f39c7       coredns-5dd5756b68-dzd2f
	7836dc2548675       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   ee835a4d02884       kube-proxy-d8lcq
	9af5ff2ded14a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   9411d2b23ff86       storage-provisioner
	71abda814d83c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   66c48b48683c9       kube-scheduler-default-k8s-diff-port-709012
	16df79e79d4d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   b53717eff7abc       etcd-default-k8s-diff-port-709012
	5f5ae904a7af1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   92fecba08bfb9       kube-controller-manager-default-k8s-diff-port-709012
	9a14416fbd453       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   c8580dda7b408       kube-apiserver-default-k8s-diff-port-709012
	
	
	==> coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32938 - 8366 "HINFO IN 7933565490702889080.8938532282615614641. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028540689s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-709012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-709012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=default-k8s-diff-port-709012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_31_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:31:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-709012
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:00:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:59:51 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:59:51 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:59:51 +0000   Mon, 15 Jan 2024 10:31:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:59:51 +0000   Mon, 15 Jan 2024 10:39:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    default-k8s-diff-port-709012
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 24585c0896e64350a08959541c747c05
	  System UUID:                24585c08-96e6-4350-a089-59541c747c05
	  Boot ID:                    977e9528-e135-4755-ab18-3d90ca37c59d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-dzd2f                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-709012                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-709012             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-709012    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-d8lcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-709012             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-qpb25                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-709012 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-709012 event: Registered Node default-k8s-diff-port-709012 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-709012 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-709012 event: Registered Node default-k8s-diff-port-709012 in Controller
	
	
	==> dmesg <==
	[Jan15 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073941] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.645696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.294636] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152078] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.642593] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.292277] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.141094] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.222371] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.153547] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.276154] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +17.661926] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[Jan15 10:39] kauditd_printk_skb: 19 callbacks suppressed
	[Jan15 11:00] hrtimer: interrupt took 2239373 ns
	
	
	==> etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] <==
	{"level":"info","ts":"2024-01-15T10:58:19.854905Z","caller":"traceutil/trace.go:171","msg":"trace[1890903237] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1561; }","duration":"309.038797ms","start":"2024-01-15T10:58:19.545857Z","end":"2024-01-15T10:58:19.854896Z","steps":["trace[1890903237] 'agreement among raft nodes before linearized reading'  (duration: 308.887275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:58:19.855015Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:58:19.545841Z","time spent":"309.164307ms","remote":"127.0.0.1:59236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true "}
	{"level":"info","ts":"2024-01-15T10:58:54.477732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1345}
	{"level":"info","ts":"2024-01-15T10:58:54.479389Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1345,"took":"1.392149ms","hash":3343397816}
	{"level":"info","ts":"2024-01-15T10:58:54.479427Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3343397816,"revision":1345,"compact-revision":1102}
	{"level":"warn","ts":"2024-01-15T10:59:08.45102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.271851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:59:08.451198Z","caller":"traceutil/trace.go:171","msg":"trace[1590316837] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1599; }","duration":"127.454055ms","start":"2024-01-15T10:59:08.323722Z","end":"2024-01-15T10:59:08.451176Z","steps":["trace[1590316837] 'range keys from in-memory index tree'  (duration: 127.147715ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:59:09.21677Z","caller":"traceutil/trace.go:171","msg":"trace[1714128699] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"135.719271ms","start":"2024-01-15T10:59:09.081028Z","end":"2024-01-15T10:59:09.216747Z","steps":["trace[1714128699] 'process raft request'  (duration: 94.701637ms)","trace[1714128699] 'compare'  (duration: 40.856916ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T10:59:11.253587Z","caller":"traceutil/trace.go:171","msg":"trace[487341408] transaction","detail":"{read_only:false; response_revision:1603; number_of_response:1; }","duration":"194.181293ms","start":"2024-01-15T10:59:11.059367Z","end":"2024-01-15T10:59:11.253549Z","steps":["trace[487341408] 'process raft request'  (duration: 193.868369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:59:37.746736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.947771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-01-15T10:59:37.74707Z","caller":"traceutil/trace.go:171","msg":"trace[1206793983] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1623; }","duration":"127.422726ms","start":"2024-01-15T10:59:37.619627Z","end":"2024-01-15T10:59:37.747049Z","steps":["trace[1206793983] 'range keys from in-memory index tree'  (duration: 126.696156ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:59:41.843543Z","caller":"traceutil/trace.go:171","msg":"trace[1793120765] transaction","detail":"{read_only:false; response_revision:1627; number_of_response:1; }","duration":"125.370692ms","start":"2024-01-15T10:59:41.718136Z","end":"2024-01-15T10:59:41.843507Z","steps":["trace[1793120765] 'process raft request'  (duration: 125.161325ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:59:42.125708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.743416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:59:42.125845Z","caller":"traceutil/trace.go:171","msg":"trace[485491284] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1627; }","duration":"223.920865ms","start":"2024-01-15T10:59:41.901904Z","end":"2024-01-15T10:59:42.125825Z","steps":["trace[485491284] 'range keys from in-memory index tree'  (duration: 223.669149ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:59:42.671068Z","caller":"traceutil/trace.go:171","msg":"trace[2125487644] linearizableReadLoop","detail":"{readStateIndex:1923; appliedIndex:1922; }","duration":"143.041808ms","start":"2024-01-15T10:59:42.528013Z","end":"2024-01-15T10:59:42.671055Z","steps":["trace[2125487644] 'read index received'  (duration: 142.937831ms)","trace[2125487644] 'applied index is now lower than readState.Index'  (duration: 103.291µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:59:42.671194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.185562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-01-15T10:59:42.671223Z","caller":"traceutil/trace.go:171","msg":"trace[817544702] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1627; }","duration":"143.223853ms","start":"2024-01-15T10:59:42.527986Z","end":"2024-01-15T10:59:42.67121Z","steps":["trace[817544702] 'agreement among raft nodes before linearized reading'  (duration: 143.15727ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:59:43.188142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.003264ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:59:43.188911Z","caller":"traceutil/trace.go:171","msg":"trace[1204797812] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1627; }","duration":"167.764267ms","start":"2024-01-15T10:59:43.021098Z","end":"2024-01-15T10:59:43.188862Z","steps":["trace[1204797812] 'range keys from in-memory index tree'  (duration: 166.973257ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:59:43.188571Z","caller":"traceutil/trace.go:171","msg":"trace[1432957916] linearizableReadLoop","detail":"{readStateIndex:1924; appliedIndex:1923; }","duration":"286.190642ms","start":"2024-01-15T10:59:42.902365Z","end":"2024-01-15T10:59:43.188555Z","steps":["trace[1432957916] 'read index received'  (duration: 190.025156ms)","trace[1432957916] 'applied index is now lower than readState.Index'  (duration: 96.164769ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:59:43.188676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.323592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:59:43.189973Z","caller":"traceutil/trace.go:171","msg":"trace[1747005861] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1628; }","duration":"287.624072ms","start":"2024-01-15T10:59:42.902333Z","end":"2024-01-15T10:59:43.189957Z","steps":["trace[1747005861] 'agreement among raft nodes before linearized reading'  (duration: 286.270555ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:59:43.188802Z","caller":"traceutil/trace.go:171","msg":"trace[340303229] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"513.85127ms","start":"2024-01-15T10:59:42.67494Z","end":"2024-01-15T10:59:43.188791Z","steps":["trace[340303229] 'process raft request'  (duration: 417.56392ms)","trace[340303229] 'compare'  (duration: 95.847968ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:59:43.19031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:59:42.674925Z","time spent":"515.299724ms","remote":"127.0.0.1:59200","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1625 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-15T11:00:27.592839Z","caller":"traceutil/trace.go:171","msg":"trace[2117490491] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"104.793286ms","start":"2024-01-15T11:00:27.488015Z","end":"2024-01-15T11:00:27.592808Z","steps":["trace[2117490491] 'process raft request'  (duration: 104.170765ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:00:40 up 22 min,  0 users,  load average: 0.16, 0.25, 0.18
	Linux default-k8s-diff-port-709012 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] <==
	Trace[1922790494]: [530.023348ms] [530.023348ms] END
	I0115 10:58:56.104294       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:58:56.302119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:58:56.302294       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:58:56.303091       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:58:57.302533       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:58:57.302630       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:58:57.302687       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:58:57.302645       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:58:57.302802       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:58:57.304121       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:59:43.192180       1 trace.go:236] Trace[1788334253]: "Update" accept:application/json, */*,audit-id:6b0ccae0-5280-4687-ba3f-01e463180b39,client:192.168.39.125,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (15-Jan-2024 10:59:42.673) (total time: 518ms):
	Trace[1788334253]: ["GuaranteedUpdate etcd3" audit-id:6b0ccae0-5280-4687-ba3f-01e463180b39,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 518ms (10:59:42.673)
	Trace[1788334253]:  ---"Txn call completed" 516ms (10:59:43.191)]
	Trace[1788334253]: [518.220809ms] [518.220809ms] END
	I0115 10:59:56.104732       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 10:59:57.303385       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:59:57.303424       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:59:57.303511       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:59:57.304723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:59:57.304808       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:59:57.304821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] <==
	I0115 10:55:22.053237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="156.353µs"
	E0115 10:55:39.677618       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:55:40.249918       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:56:09.683608       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:10.259093       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:56:39.692710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:40.270738       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:09.703503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:10.281084       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:39.709779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:40.290577       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:58:09.716976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:58:10.303278       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:58:39.723687       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:58:40.315525       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:59:09.732667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:59:10.333685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:59:39.740247       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:59:40.345123       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 11:00:09.747224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 11:00:10.353886       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 11:00:13.057520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="332.185µs"
	I0115 11:00:26.058906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="265.057µs"
	E0115 11:00:39.760012       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 11:00:40.362308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] <==
	I0115 10:38:58.407200       1 server_others.go:69] "Using iptables proxy"
	I0115 10:38:58.592635       1 node.go:141] Successfully retrieved node IP: 192.168.39.125
	I0115 10:38:58.647569       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 10:38:58.647656       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:38:58.651018       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:38:58.651111       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:38:58.651630       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:38:58.651736       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:58.652834       1 config.go:188] "Starting service config controller"
	I0115 10:38:58.652895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:38:58.652943       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:38:58.652966       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:38:58.654036       1 config.go:315] "Starting node config controller"
	I0115 10:38:58.654079       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:38:58.753686       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:38:58.753711       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:38:58.754416       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] <==
	I0115 10:38:54.089947       1 serving.go:348] Generated self-signed cert in-memory
	W0115 10:38:56.207133       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:38:56.207264       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:38:56.207303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:38:56.207328       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:38:56.305000       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0115 10:38:56.308357       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:38:56.325538       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:38:56.325593       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:38:56.335374       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:38:56.335542       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:38:56.427685       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:38:22 UTC, ends at Mon 2024-01-15 11:00:41 UTC. --
	Jan 15 10:58:02 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:02.034683     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:58:15 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:15.034286     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:58:30 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:30.034662     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:58:44 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:44.037748     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:58:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:50.059282     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:58:50 default-k8s-diff-port-709012 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:58:50 default-k8s-diff-port-709012 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:58:50 default-k8s-diff-port-709012 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:58:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:50.066732     920 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 15 10:58:55 default-k8s-diff-port-709012 kubelet[920]: E0115 10:58:55.033782     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:59:09 default-k8s-diff-port-709012 kubelet[920]: E0115 10:59:09.034415     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:59:22 default-k8s-diff-port-709012 kubelet[920]: E0115 10:59:22.035237     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:59:36 default-k8s-diff-port-709012 kubelet[920]: E0115 10:59:36.034849     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:59:48 default-k8s-diff-port-709012 kubelet[920]: E0115 10:59:48.034346     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 10:59:50 default-k8s-diff-port-709012 kubelet[920]: E0115 10:59:50.051079     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:59:50 default-k8s-diff-port-709012 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:59:50 default-k8s-diff-port-709012 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:59:50 default-k8s-diff-port-709012 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 11:00:02 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:02.047921     920 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 11:00:02 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:02.048006     920 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 11:00:02 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:02.048226     920 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5qllp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-qpb25_kube-system(3f101dc0-1411-4554-a46a-7d829f2345ad): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 11:00:02 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:02.048262     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 11:00:13 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:13.035315     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 11:00:26 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:26.036277     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	Jan 15 11:00:38 default-k8s-diff-port-709012 kubelet[920]: E0115 11:00:38.034832     920 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qpb25" podUID="3f101dc0-1411-4554-a46a-7d829f2345ad"
	
	
	==> storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] <==
	I0115 10:38:58.292863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:39:28.295049       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] <==
	I0115 10:39:29.409147       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:39:29.426302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:39:29.426379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:39:46.832680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:39:46.835409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d!
	I0115 10:39:46.835243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4df36283-0c04-4d23-ae3d-a2d9fc710156", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d became leader
	I0115 10:39:46.936397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-709012_cc01d9de-fa0f-4c8d-9153-8cd977e0392d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qpb25
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25: exit status 1 (62.01459ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qpb25" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-709012 describe pod metrics-server-57f55c9bc5-qpb25: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (495.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0115 10:53:02.569880   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824502 -n no-preload-824502
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:58:07.910853937 +0000 UTC m=+5499.882021355
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-824502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-824502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.136µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-824502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-824502 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-824502 logs -n 25: (1.268927684s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:56 UTC | 15 Jan 24 10:56 UTC |
	| start   | -p newest-cni-273069 --memory=2200 --alsologtostderr   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:56 UTC | 15 Jan 24 10:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-273069             | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-273069                                   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-273069                  | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-273069 --memory=2200 --alsologtostderr   | newest-cni-273069            | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:57:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:57:50.226441   52070 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:57:50.226730   52070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:57:50.226742   52070 out.go:309] Setting ErrFile to fd 2...
	I0115 10:57:50.226750   52070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:57:50.226937   52070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:57:50.227443   52070 out.go:303] Setting JSON to false
	I0115 10:57:50.228382   52070 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5970,"bootTime":1705310300,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:57:50.228442   52070 start.go:138] virtualization: kvm guest
	I0115 10:57:50.230749   52070 out.go:177] * [newest-cni-273069] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:57:50.232192   52070 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:57:50.232239   52070 notify.go:220] Checking for updates...
	I0115 10:57:50.233508   52070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:57:50.234778   52070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:57:50.236078   52070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:57:50.237422   52070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:57:50.238716   52070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:57:50.240361   52070 config.go:182] Loaded profile config "newest-cni-273069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:57:50.240813   52070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:57:50.240857   52070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:57:50.254870   52070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0115 10:57:50.255335   52070 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:57:50.255963   52070 main.go:141] libmachine: Using API Version  1
	I0115 10:57:50.256013   52070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:57:50.256395   52070 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:57:50.256604   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:57:50.256932   52070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:57:50.257361   52070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:57:50.257400   52070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:57:50.271106   52070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0115 10:57:50.271487   52070 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:57:50.271981   52070 main.go:141] libmachine: Using API Version  1
	I0115 10:57:50.272006   52070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:57:50.272392   52070 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:57:50.272581   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:57:50.309184   52070 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:57:50.310784   52070 start.go:298] selected driver: kvm2
	I0115 10:57:50.310797   52070 start.go:902] validating driver "kvm2" against &{Name:newest-cni-273069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-273069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:57:50.310908   52070 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:57:50.311666   52070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:57:50.311730   52070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:57:50.325684   52070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:57:50.326039   52070 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0115 10:57:50.326100   52070 cni.go:84] Creating CNI manager for ""
	I0115 10:57:50.326115   52070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:57:50.326128   52070 start_flags.go:321] config:
	{Name:newest-cni-273069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-273069 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:57:50.326284   52070 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:57:50.328966   52070 out.go:177] * Starting control plane node newest-cni-273069 in cluster newest-cni-273069
	I0115 10:57:50.330690   52070 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:57:50.330721   52070 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0115 10:57:50.330733   52070 cache.go:56] Caching tarball of preloaded images
	I0115 10:57:50.330806   52070 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:57:50.330822   52070 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0115 10:57:50.330960   52070 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/newest-cni-273069/config.json ...
	I0115 10:57:50.331173   52070 start.go:365] acquiring machines lock for newest-cni-273069: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:57:50.331220   52070 start.go:369] acquired machines lock for "newest-cni-273069" in 28.052µs
	I0115 10:57:50.331239   52070 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:57:50.331245   52070 fix.go:54] fixHost starting: 
	I0115 10:57:50.331528   52070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:57:50.331561   52070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:57:50.344958   52070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0115 10:57:50.345345   52070 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:57:50.345835   52070 main.go:141] libmachine: Using API Version  1
	I0115 10:57:50.345860   52070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:57:50.346174   52070 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:57:50.346401   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	I0115 10:57:50.346591   52070 main.go:141] libmachine: (newest-cni-273069) Calling .GetState
	I0115 10:57:50.348117   52070 fix.go:102] recreateIfNeeded on newest-cni-273069: state=Stopped err=<nil>
	I0115 10:57:50.348154   52070 main.go:141] libmachine: (newest-cni-273069) Calling .DriverName
	W0115 10:57:50.348297   52070 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:57:50.350200   52070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-273069" ...
	I0115 10:57:50.351510   52070 main.go:141] libmachine: (newest-cni-273069) Calling .Start
	I0115 10:57:50.351681   52070 main.go:141] libmachine: (newest-cni-273069) Ensuring networks are active...
	I0115 10:57:50.352461   52070 main.go:141] libmachine: (newest-cni-273069) Ensuring network default is active
	I0115 10:57:50.352789   52070 main.go:141] libmachine: (newest-cni-273069) Ensuring network mk-newest-cni-273069 is active
	I0115 10:57:50.353135   52070 main.go:141] libmachine: (newest-cni-273069) Getting domain xml...
	I0115 10:57:50.353796   52070 main.go:141] libmachine: (newest-cni-273069) Creating domain...
	I0115 10:57:51.617607   52070 main.go:141] libmachine: (newest-cni-273069) Waiting to get IP...
	I0115 10:57:51.618752   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:51.619203   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:51.619306   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:51.619192   52104 retry.go:31] will retry after 199.864563ms: waiting for machine to come up
	I0115 10:57:51.820592   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:51.821040   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:51.821073   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:51.820984   52104 retry.go:31] will retry after 307.763402ms: waiting for machine to come up
	I0115 10:57:52.130505   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:52.131091   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:52.131116   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:52.131041   52104 retry.go:31] will retry after 444.679957ms: waiting for machine to come up
	I0115 10:57:52.577444   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:52.578007   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:52.578030   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:52.577950   52104 retry.go:31] will retry after 596.328601ms: waiting for machine to come up
	I0115 10:57:53.175380   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:53.175924   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:53.175962   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:53.175867   52104 retry.go:31] will retry after 595.727949ms: waiting for machine to come up
	I0115 10:57:53.773718   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:53.774187   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:53.774218   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:53.774113   52104 retry.go:31] will retry after 902.010921ms: waiting for machine to come up
	I0115 10:57:54.677408   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:54.677972   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:54.677997   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:54.677902   52104 retry.go:31] will retry after 1.06862082s: waiting for machine to come up
	I0115 10:57:55.748836   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:55.749373   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:55.749408   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:55.749310   52104 retry.go:31] will retry after 1.370082426s: waiting for machine to come up
	I0115 10:57:57.121768   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:57.122336   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:57.122368   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:57.122284   52104 retry.go:31] will retry after 1.755748936s: waiting for machine to come up
	I0115 10:57:58.880078   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:57:58.880608   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:57:58.880636   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:57:58.880560   52104 retry.go:31] will retry after 1.939229001s: waiting for machine to come up
	I0115 10:58:00.820986   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:00.821553   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:58:00.821591   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:58:00.821462   52104 retry.go:31] will retry after 2.359496424s: waiting for machine to come up
	I0115 10:58:03.183735   52070 main.go:141] libmachine: (newest-cni-273069) DBG | domain newest-cni-273069 has defined MAC address 52:54:00:8b:87:c9 in network mk-newest-cni-273069
	I0115 10:58:03.184347   52070 main.go:141] libmachine: (newest-cni-273069) DBG | unable to find current IP address of domain newest-cni-273069 in network mk-newest-cni-273069
	I0115 10:58:03.184380   52070 main.go:141] libmachine: (newest-cni-273069) DBG | I0115 10:58:03.184304   52104 retry.go:31] will retry after 3.099706815s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:38:43 UTC, ends at Mon 2024-01-15 10:58:08 UTC. --
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.603167879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316288603146712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=81414271-9049-4a2e-8dc2-102a2632c9ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.603727029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6d91a1a-f9ae-4ca7-b66d-95f467282647 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.603871553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6d91a1a-f9ae-4ca7-b66d-95f467282647 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.604086190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6d91a1a-f9ae-4ca7-b66d-95f467282647 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.640838762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c82d2016-fc7b-4a47-846c-abae5942b336 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.640920577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c82d2016-fc7b-4a47-846c-abae5942b336 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.648291783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7af924ac-75e3-4b8a-bca6-425e0c50babb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.648675596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316288648659509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7af924ac-75e3-4b8a-bca6-425e0c50babb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.649346515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c5fbb40a-8e94-4d57-bcba-a63f45658c25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.649398699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c5fbb40a-8e94-4d57-bcba-a63f45658c25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.649620105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c5fbb40a-8e94-4d57-bcba-a63f45658c25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.687907667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b84a8c81-d65d-4c7d-9acf-bbec2b438b1e name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.687962212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b84a8c81-d65d-4c7d-9acf-bbec2b438b1e name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.688987728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b4e4faf0-df4c-4de7-bfe5-2dcb2b80e8fc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.689291775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316288689281551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b4e4faf0-df4c-4de7-bfe5-2dcb2b80e8fc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.689930094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=594fc5ee-a9da-4d2d-8ae7-a74d46cfc30c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.689979905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=594fc5ee-a9da-4d2d-8ae7-a74d46cfc30c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.690158511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=594fc5ee-a9da-4d2d-8ae7-a74d46cfc30c name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.737418193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=28569b2c-9a91-451c-9189-01693253e2ae name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.737475763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=28569b2c-9a91-451c-9189-01693253e2ae name=/runtime.v1.RuntimeService/Version
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.738712357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a22eae3e-1fe0-480c-b536-b78a746517fb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.739112661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316288739099159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a22eae3e-1fe0-480c-b536-b78a746517fb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.739967668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2a90838d-89c1-4aab-8d99-d24446a62167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.740016141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2a90838d-89c1-4aab-8d99-d24446a62167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:58:08 no-preload-824502 crio[729]: time="2024-01-15 10:58:08.740215780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705315202418432756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c658cb24796b95a8cdf4a506b265e1066ce03f741a8959b8a127df9c10370b1,PodSandboxId:d4e64526313335437437695aeaa86e72116b6e60fb962261f3cb3c8a5410465e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705315179852234581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1219dd-88d2-4145-bdfe-b716393e8b47,},Annotations:map[string]string{io.kubernetes.container.hash: d4872e5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b,PodSandboxId:33e649279b1e7e2601085bf8a8c4b29c51f102075bdaa0685f7b86230144591b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705315178538095397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ft2wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217729a7-bdfa-452f-8df4-5a9694ad2f02,},Annotations:map[string]string{io.kubernetes.container.hash: 13f696c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2,PodSandboxId:568ce00da390ab1b10d4120ea4334dc03b3219718e53652f9b939e38289aa5ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705315171188029491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7aa7c9c-df
52-4073-a603-b283d123a230,},Annotations:map[string]string{io.kubernetes.container.hash: 9064c25e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5,PodSandboxId:8570c1add81528b48b680fd32d25ada994686e09732d65b979e7445ad01feb2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705315171163870484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94d8b0f-d2b0
-4f57-9ab7-ff90a842499d,},Annotations:map[string]string{io.kubernetes.container.hash: df1833e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f,PodSandboxId:4b798a6d56bf7c0110d82f442a0dedc06335e8cf29c2d62226b8ac319cb71070,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705315164680400560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe902eda49d681254c2ad8c6e52376dd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3eaaa882,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563,PodSandboxId:858c56ed12a283b930bc4434200d90e3d464cebdc3dc9766fa3470737e54e5bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705315164755646862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e9738fcb57d7e53d2a1c6d319c93db,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751,PodSandboxId:519d9ce32cc3c611e832afe27264ba3b5cd25f59e6b6d51c6b51d289e2d97ebf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705315164485696611,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57ecfa2b1aac56d5c4a0f01bdad34f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8dc2e508,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6,PodSandboxId:baab48d4ddcef816e51c3455dfceb43c4d1d225b841266006dd55a66d7cdfddf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705315164264492139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-824502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d392d382c8dfcc2c2f98a184d7efd663,},Annotations:map[string
]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2a90838d-89c1-4aab-8d99-d24446a62167 name=/runtime.v1.RuntimeService/ListContainers
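	(Editor's note, not part of the captured log: the repeated Version/ImageFsInfo/ListContainers entries above are CRI-O answering the kubelet's periodic CRI polls over unix:///var/run/crio/crio.sock, the socket named in the node's cri-socket annotation further below. For readers who want to replay the same ListContainers call by hand, a minimal sketch using the Go cri-api client is shown here; the client code and its output formatting are illustrative assumptions, not minikube test code.)

	// replay-listcontainers.go - illustrative sketch only
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path taken from the logs above; adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter, exactly like the requests in the log: the runtime
		// returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}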
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	559a40ec4f19b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   8570c1add8152       storage-provisioner
	1c658cb24796b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   d4e6452631333       busybox
	014ec3fd018c5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   33e649279b1e7       coredns-76f75df574-ft2wt
	d1d6c3b6e1b4e       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      18 minutes ago      Running             kube-proxy                1                   568ce00da390a       kube-proxy-nlk2h
	9d1cf90048e83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       1                   8570c1add8152       storage-provisioner
	c382ae3f75656       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      18 minutes ago      Running             kube-scheduler            1                   858c56ed12a28       kube-scheduler-no-preload-824502
	0a1fe00474627       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      18 minutes ago      Running             etcd                      1                   4b798a6d56bf7       etcd-no-preload-824502
	04397ad49a123       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      18 minutes ago      Running             kube-apiserver            1                   519d9ce32cc3c       kube-apiserver-no-preload-824502
	aea55e3208ce8       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      18 minutes ago      Running             kube-controller-manager   1                   baab48d4ddcef       kube-controller-manager-no-preload-824502
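	(Editor's note, not part of the captured log: a table of this shape can be reproduced against the same runtime with crictl, which speaks the CRI directly; the command below is an illustrative example, not something the test harness ran.)

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a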
	
	
	==> coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57947 - 27121 "HINFO IN 3732147130076988560.2592678263682650894. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029725305s
	
	
	==> describe nodes <==
	Name:               no-preload-824502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-824502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=no-preload-824502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_29_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:29:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-824502
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:58:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:55:18 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:55:18 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:55:18 +0000   Mon, 15 Jan 2024 10:29:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:55:18 +0000   Mon, 15 Jan 2024 10:39:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.136
	  Hostname:    no-preload-824502
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed3417f43fb042f283634814d5ef2c19
	  System UUID:                ed3417f4-3fb0-42f2-8363-4814d5ef2c19
	  Boot ID:                    af76b30a-85fa-4e0a-abf3-71edc5159ff3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-ft2wt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-824502                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-824502             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-824502    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-nlk2h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-824502             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-6tcwm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-824502 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-824502 event: Registered Node no-preload-824502 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-824502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-824502 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-824502 event: Registered Node no-preload-824502 in Controller
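	(Editor's note, not part of the captured log: the node description above is the standard kubectl node view; to regenerate it against this profile one would run something like the following, where the context name is assumed to match the minikube profile.)

	kubectl --context no-preload-824502 describe node no-preload-824502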
	
	
	==> dmesg <==
	[Jan15 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070320] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.814482] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.583066] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143889] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.479804] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.085547] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.138900] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.155329] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.107687] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[  +0.248642] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[Jan15 10:39] systemd-fstab-generator[1348]: Ignoring "noauto" for root device
	[ +15.081212] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] <==
	{"level":"info","ts":"2024-01-15T10:49:28.109774Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":854,"took":"2.338955ms","hash":1953090962}
	{"level":"info","ts":"2024-01-15T10:49:28.109998Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1953090962,"revision":854,"compact-revision":-1}
	{"level":"info","ts":"2024-01-15T10:54:28.122033Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2024-01-15T10:54:28.124261Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1096,"took":"1.847692ms","hash":1619794337}
	{"level":"info","ts":"2024-01-15T10:54:28.124323Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1619794337,"revision":1096,"compact-revision":854}
	{"level":"info","ts":"2024-01-15T10:57:11.396433Z","caller":"traceutil/trace.go:171","msg":"trace[398429209] linearizableReadLoop","detail":"{readStateIndex:1727; appliedIndex:1727; }","duration":"387.721212ms","start":"2024-01-15T10:57:11.008652Z","end":"2024-01-15T10:57:11.396373Z","steps":["trace[398429209] 'read index received'  (duration: 387.714469ms)","trace[398429209] 'applied index is now lower than readState.Index'  (duration: 5.527µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:57:11.397762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.087132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:57:11.397933Z","caller":"traceutil/trace.go:171","msg":"trace[900199786] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1471; }","duration":"389.328716ms","start":"2024-01-15T10:57:11.008581Z","end":"2024-01-15T10:57:11.39791Z","steps":["trace[900199786] 'agreement among raft nodes before linearized reading'  (duration: 388.179107ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:57:11.397974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:57:11.008563Z","time spent":"389.403096ms","remote":"127.0.0.1:37274","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-01-15T10:57:11.396405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:57:10.980766Z","time spent":"415.613202ms","remote":"127.0.0.1:37240","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-01-15T10:57:11.797299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.290284ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:57:11.797418Z","caller":"traceutil/trace.go:171","msg":"trace[1077312645] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1471; }","duration":"325.429834ms","start":"2024-01-15T10:57:11.47197Z","end":"2024-01-15T10:57:11.797399Z","steps":["trace[1077312645] 'range keys from in-memory index tree'  (duration: 325.259142ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:57:11.798325Z","caller":"traceutil/trace.go:171","msg":"trace[1408728314] transaction","detail":"{read_only:false; response_revision:1472; number_of_response:1; }","duration":"514.019427ms","start":"2024-01-15T10:57:11.284294Z","end":"2024-01-15T10:57:11.798314Z","steps":["trace[1408728314] 'process raft request'  (duration: 491.592705ms)","trace[1408728314] 'compare'  (duration: 21.493127ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:57:11.799215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:57:11.284273Z","time spent":"514.169389ms","remote":"127.0.0.1:37292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-wioixhebpwcsqjhjvk33kxzwsy\" mod_revision:1464 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-wioixhebpwcsqjhjvk33kxzwsy\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-wioixhebpwcsqjhjvk33kxzwsy\" > >"}
	{"level":"info","ts":"2024-01-15T10:57:11.799683Z","caller":"traceutil/trace.go:171","msg":"trace[656859835] transaction","detail":"{read_only:false; response_revision:1473; number_of_response:1; }","duration":"397.711668ms","start":"2024-01-15T10:57:11.401959Z","end":"2024-01-15T10:57:11.799671Z","steps":["trace[656859835] 'process raft request'  (duration: 395.594847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:57:11.799853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:57:11.401944Z","time spent":"397.817008ms","remote":"127.0.0.1:37240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":119,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.136\" mod_revision:1465 > success:<request_put:<key:\"/registry/masterleases/192.168.50.136\" value_size:67 lease:63486955465810529 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.136\" > >"}
	{"level":"info","ts":"2024-01-15T10:57:11.80039Z","caller":"traceutil/trace.go:171","msg":"trace[948818314] linearizableReadLoop","detail":"{readStateIndex:1728; appliedIndex:1727; }","duration":"403.63994ms","start":"2024-01-15T10:57:11.396741Z","end":"2024-01-15T10:57:11.800381Z","steps":["trace[948818314] 'read index received'  (duration: 379.154195ms)","trace[948818314] 'applied index is now lower than readState.Index'  (duration: 24.484457ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:57:11.800572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"783.667672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T10:57:11.800591Z","caller":"traceutil/trace.go:171","msg":"trace[1807593446] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1473; }","duration":"783.689683ms","start":"2024-01-15T10:57:11.016895Z","end":"2024-01-15T10:57:11.800585Z","steps":["trace[1807593446] 'agreement among raft nodes before linearized reading'  (duration: 783.568223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:57:11.800606Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T10:57:11.016877Z","time spent":"783.725252ms","remote":"127.0.0.1:37230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-15T10:57:11.96561Z","caller":"traceutil/trace.go:171","msg":"trace[1452302753] transaction","detail":"{read_only:false; response_revision:1474; number_of_response:1; }","duration":"159.55897ms","start":"2024-01-15T10:57:11.806022Z","end":"2024-01-15T10:57:11.965581Z","steps":["trace[1452302753] 'process raft request'  (duration: 99.881638ms)","trace[1452302753] 'compare'  (duration: 59.508999ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T10:57:11.965923Z","caller":"traceutil/trace.go:171","msg":"trace[1640564477] transaction","detail":"{read_only:false; response_revision:1475; number_of_response:1; }","duration":"122.253468ms","start":"2024-01-15T10:57:11.843652Z","end":"2024-01-15T10:57:11.965906Z","steps":["trace[1640564477] 'process raft request'  (duration: 121.883538ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:57:11.966204Z","caller":"traceutil/trace.go:171","msg":"trace[132708660] linearizableReadLoop","detail":"{readStateIndex:1730; appliedIndex:1729; }","duration":"158.6618ms","start":"2024-01-15T10:57:11.807531Z","end":"2024-01-15T10:57:11.966193Z","steps":["trace[132708660] 'read index received'  (duration: 98.381898ms)","trace[132708660] 'applied index is now lower than readState.Index'  (duration: 60.278953ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:57:11.966349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.840502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2024-01-15T10:57:11.968236Z","caller":"traceutil/trace.go:171","msg":"trace[855737275] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1475; }","duration":"160.476199ms","start":"2024-01-15T10:57:11.80749Z","end":"2024-01-15T10:57:11.967967Z","steps":["trace[855737275] 'agreement among raft nodes before linearized reading'  (duration: 158.813351ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:58:09 up 19 min,  0 users,  load average: 0.10, 0.15, 0.15
	Linux no-preload-824502 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] <==
	E0115 10:54:30.619699       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:54:30.620912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:55:30.620643       1 handler_proxy.go:93] no RequestInfo found in the context
	W0115 10:55:30.621061       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:55:30.621072       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:55:30.621154       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0115 10:55:30.621202       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:55:30.622898       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:57:11.800417       1 trace.go:236] Trace[240674457]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:8c2d4942-fab1-4c68-b931-522b1697b28f,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-wioixhebpwcsqjhjvk33kxzwsy,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-wioixhebpwcsqjhjvk33kxzwsy,user-agent:kube-apiserver/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:PUT (15-Jan-2024 10:57:11.283) (total time: 517ms):
	Trace[240674457]: ["GuaranteedUpdate etcd3" audit-id:8c2d4942-fab1-4c68-b931-522b1697b28f,key:/leases/kube-system/apiserver-wioixhebpwcsqjhjvk33kxzwsy,type:*coordination.Lease,resource:leases.coordination.k8s.io 517ms (10:57:11.283)
	Trace[240674457]:  ---"Txn call completed" 516ms (10:57:11.800)]
	Trace[240674457]: [517.336667ms] [517.336667ms] END
	I0115 10:57:11.801240       1 trace.go:236] Trace[1023884687]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.136,type:*v1.Endpoints,resource:apiServerIPInfo (15-Jan-2024 10:57:10.979) (total time: 821ms):
	Trace[1023884687]: ---"Transaction prepared" 419ms (10:57:11.400)
	Trace[1023884687]: ---"Txn call completed" 400ms (10:57:11.801)
	Trace[1023884687]: [821.890471ms] [821.890471ms] END
	W0115 10:57:30.621667       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:57:30.621985       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0115 10:57:30.622020       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0115 10:57:30.623377       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 10:57:30.623509       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:57:30.623541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] <==
	I0115 10:52:13.525118       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:52:42.946104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:52:43.533704       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:53:12.952951       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:53:13.542011       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:53:42.959585       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:53:43.553494       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:54:12.965246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:54:13.562071       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:54:42.970947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:54:43.571171       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:55:12.977017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:55:13.580732       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:55:42.982885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:55:43.589054       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0115 10:55:48.232685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="219.279µs"
	I0115 10:56:03.235882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="248.846µs"
	E0115 10:56:12.991767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:13.598552       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:56:43.000324       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:56:43.608642       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:13.007078       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:13.618073       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0115 10:57:43.012933       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0115 10:57:43.626127       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] <==
	I0115 10:39:31.713743       1 server_others.go:72] "Using iptables proxy"
	I0115 10:39:31.739400       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.136"]
	I0115 10:39:31.801266       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0115 10:39:31.801354       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 10:39:31.801392       1 server_others.go:168] "Using iptables Proxier"
	I0115 10:39:31.804862       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:39:31.805071       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0115 10:39:31.805122       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:39:31.808052       1 config.go:188] "Starting service config controller"
	I0115 10:39:31.808198       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:39:31.808300       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:39:31.808327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:39:31.811355       1 config.go:315] "Starting node config controller"
	I0115 10:39:31.811475       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:39:31.908532       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:39:31.908949       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:39:31.912488       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] <==
	I0115 10:39:26.806562       1 serving.go:380] Generated self-signed cert in-memory
	W0115 10:39:29.562316       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 10:39:29.562371       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:39:29.562381       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 10:39:29.562387       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 10:39:29.620685       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0115 10:39:29.621071       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:39:29.623162       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0115 10:39:29.623245       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 10:39:29.624399       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0115 10:39:29.624568       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0115 10:39:29.723865       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:38:43 UTC, ends at Mon 2024-01-15 10:58:09 UTC. --
	Jan 15 10:55:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:55:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:55:36 no-preload-824502 kubelet[1354]: E0115 10:55:36.227403    1354 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:55:36 no-preload-824502 kubelet[1354]: E0115 10:55:36.227443    1354 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 15 10:55:36 no-preload-824502 kubelet[1354]: E0115 10:55:36.227616    1354 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bn4mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6tcwm_kube-system(1815c2ae-e5ce-4c79-9fd9-79b28c2c6780): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:55:36 no-preload-824502 kubelet[1354]: E0115 10:55:36.227658    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:55:48 no-preload-824502 kubelet[1354]: E0115 10:55:48.215410    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:56:03 no-preload-824502 kubelet[1354]: E0115 10:56:03.215942    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:56:17 no-preload-824502 kubelet[1354]: E0115 10:56:17.216883    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:56:23 no-preload-824502 kubelet[1354]: E0115 10:56:23.231980    1354 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:56:23 no-preload-824502 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:56:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:56:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:56:31 no-preload-824502 kubelet[1354]: E0115 10:56:31.216009    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:56:42 no-preload-824502 kubelet[1354]: E0115 10:56:42.215577    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:56:57 no-preload-824502 kubelet[1354]: E0115 10:56:57.217459    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:57:08 no-preload-824502 kubelet[1354]: E0115 10:57:08.215727    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:57:22 no-preload-824502 kubelet[1354]: E0115 10:57:22.216097    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:57:23 no-preload-824502 kubelet[1354]: E0115 10:57:23.233692    1354 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 15 10:57:23 no-preload-824502 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 15 10:57:23 no-preload-824502 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 15 10:57:23 no-preload-824502 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 15 10:57:35 no-preload-824502 kubelet[1354]: E0115 10:57:35.216056    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:57:49 no-preload-824502 kubelet[1354]: E0115 10:57:49.217094    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	Jan 15 10:58:04 no-preload-824502 kubelet[1354]: E0115 10:58:04.215712    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6tcwm" podUID="1815c2ae-e5ce-4c79-9fd9-79b28c2c6780"
	
	
	==> storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] <==
	I0115 10:40:02.552269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:40:02.571460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:40:02.572271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:40:19.981115       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:40:19.981710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea!
	I0115 10:40:19.983394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9e99c98-1144-4bc5-bfe0-057dc2bb715e", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea became leader
	I0115 10:40:20.084175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-824502_7d20f209-d460-4749-900a-e7a118d3bbea!
	
	
	==> storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] <==
	I0115 10:39:31.554605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0115 10:40:01.567540       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824502 -n no-preload-824502
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-824502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6tcwm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm: exit status 1 (70.685234ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6tcwm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.86s)
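Editor's note (hedged sketch, not part of the captured output): the post-mortem above can be re-run by hand with the same kubectl invocations the helpers use, assuming the no-preload-824502 context shown in the log.

	# Query for non-running pods, as helpers_test.go:261 does above
	kubectl --context no-preload-824502 get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}'
	# Describe the pod that was flagged (it returned NotFound in this run)
	kubectl --context no-preload-824502 describe pod metrics-server-57f55c9bc5-6tcwm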

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0115 10:54:12.883609   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:54:21.453247   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206509 -n old-k8s-version-206509
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-15 10:56:33.373991184 +0000 UTC m=+5405.345158603
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-206509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-206509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.3µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-206509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
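Editor's note (hedged sketch, not captured output): the assertion at start_stop_delete_test.go:297 above expects the dashboard-metrics-scraper deployment to reference the substituted image registry.k8s.io/echoserver:1.4. A rough manual check against the same profile might look like the following; the jsonpath expression is illustrative and not taken from the test source.

	# Inspect the container image(s) on the dashboard-metrics-scraper deployment
	kubectl --context old-k8s-version-206509 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# The assertion above expects the output to contain: registry.k8s.io/echoserver:1.4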
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-206509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-206509 logs -n 25: (1.639499483s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-967423 -- sudo                         | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-967423                                 | cert-options-967423          | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-317803                           | kubernetes-upgrade-317803    | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:28 UTC |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:28 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-824502             | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-206509        | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-781270            | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-252810                              | cert-expiration-252810       | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-802186 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:30 UTC |
	|         | disable-driver-mounts-802186                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:30 UTC | 15 Jan 24 10:32 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-709012  | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-206509             | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-824502                  | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-206509                              | old-k8s-version-206509       | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-824502                                   | no-preload-824502            | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-781270                 | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-781270                                  | embed-certs-781270           | jenkins | v1.32.0 | 15 Jan 24 10:33 UTC | 15 Jan 24 10:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-709012       | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-709012 | jenkins | v1.32.0 | 15 Jan 24 10:34 UTC | 15 Jan 24 10:43 UTC |
	|         | default-k8s-diff-port-709012                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:34:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:34:59.863813   47063 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:34:59.864093   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864103   47063 out.go:309] Setting ErrFile to fd 2...
	I0115 10:34:59.864108   47063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:34:59.864345   47063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:34:59.864916   47063 out.go:303] Setting JSON to false
	I0115 10:34:59.865821   47063 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4600,"bootTime":1705310300,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:34:59.865878   47063 start.go:138] virtualization: kvm guest
	I0115 10:34:59.868392   47063 out.go:177] * [default-k8s-diff-port-709012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:34:59.869886   47063 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:34:59.869920   47063 notify.go:220] Checking for updates...
	I0115 10:34:59.871289   47063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:34:59.872699   47063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:34:59.874242   47063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:34:59.875739   47063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:34:59.877248   47063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:34:59.879143   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:34:59.879618   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.879682   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.893745   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0115 10:34:59.894091   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.894610   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.894633   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.894933   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.895112   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.895305   47063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:34:59.895579   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:34:59.895611   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:34:59.909045   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0115 10:34:59.909415   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:34:59.909868   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:34:59.909886   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:34:59.910173   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:34:59.910346   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:34:59.943453   47063 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 10:34:59.945154   47063 start.go:298] selected driver: kvm2
	I0115 10:34:59.945164   47063 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.945252   47063 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:34:59.945926   47063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.945991   47063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 10:34:59.959656   47063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 10:34:59.960028   47063 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:34:59.960078   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:34:59.960091   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:34:59.960106   47063 start_flags.go:321] config:
	{Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:34:59.960261   47063 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:34:59.962534   47063 out.go:177] * Starting control plane node default-k8s-diff-port-709012 in cluster default-k8s-diff-port-709012
	I0115 10:35:00.734685   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:34:59.963970   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:34:59.964003   47063 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 10:34:59.964012   47063 cache.go:56] Caching tarball of preloaded images
	I0115 10:34:59.964081   47063 preload.go:174] Found /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 10:34:59.964090   47063 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:34:59.964172   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:34:59.964356   47063 start.go:365] acquiring machines lock for default-k8s-diff-port-709012: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:35:06.814638   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:09.886665   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:15.966704   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:19.038663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:25.118649   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:28.190674   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:34.270660   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:37.342618   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:43.422663   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:46.494729   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:52.574698   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:35:55.646737   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:01.726677   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:04.798681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:10.878645   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:13.950716   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:20.030691   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:23.102681   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:29.182668   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:32.254641   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:38.334686   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:41.406690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:47.486639   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:50.558690   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:56.638684   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:36:59.710581   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:05.790664   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:08.862738   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:14.942615   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:18.014720   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:24.094644   46388 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.136:22: connect: no route to host
	I0115 10:37:27.098209   46387 start.go:369] acquired machines lock for "old-k8s-version-206509" in 4m37.373222591s
	I0115 10:37:27.098259   46387 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:27.098264   46387 fix.go:54] fixHost starting: 
	I0115 10:37:27.098603   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:27.098633   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:27.112818   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0115 10:37:27.113206   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:27.113638   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:37:27.113660   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:27.113943   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:27.114126   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:27.114270   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:37:27.115824   46387 fix.go:102] recreateIfNeeded on old-k8s-version-206509: state=Stopped err=<nil>
	I0115 10:37:27.115846   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	W0115 10:37:27.116007   46387 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:27.118584   46387 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-206509" ...
	I0115 10:37:27.119985   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Start
	I0115 10:37:27.120145   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring networks are active...
	I0115 10:37:27.120788   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network default is active
	I0115 10:37:27.121077   46387 main.go:141] libmachine: (old-k8s-version-206509) Ensuring network mk-old-k8s-version-206509 is active
	I0115 10:37:27.121463   46387 main.go:141] libmachine: (old-k8s-version-206509) Getting domain xml...
	I0115 10:37:27.122185   46387 main.go:141] libmachine: (old-k8s-version-206509) Creating domain...
	I0115 10:37:28.295990   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting to get IP...
	I0115 10:37:28.297038   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.297393   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.297470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.297380   47440 retry.go:31] will retry after 254.616903ms: waiting for machine to come up
	I0115 10:37:28.553730   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.554213   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.554238   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.554159   47440 retry.go:31] will retry after 350.995955ms: waiting for machine to come up
	I0115 10:37:28.906750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:28.907189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:28.907222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:28.907146   47440 retry.go:31] will retry after 441.292217ms: waiting for machine to come up
	I0115 10:37:29.349643   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.350011   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.350042   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.349959   47440 retry.go:31] will retry after 544.431106ms: waiting for machine to come up
	I0115 10:37:27.096269   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:27.096303   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:37:27.098084   46388 machine.go:91] provisioned docker machine in 4m37.366643974s
	I0115 10:37:27.098120   46388 fix.go:56] fixHost completed within 4m37.388460167s
	I0115 10:37:27.098126   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 4m37.388479036s
	W0115 10:37:27.098153   46388 start.go:694] error starting host: provision: host is not running
	W0115 10:37:27.098242   46388 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0115 10:37:27.098252   46388 start.go:709] Will try again in 5 seconds ...
	I0115 10:37:29.895609   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:29.896157   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:29.896189   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:29.896032   47440 retry.go:31] will retry after 489.420436ms: waiting for machine to come up
	I0115 10:37:30.386614   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:30.387037   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:30.387071   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:30.387005   47440 retry.go:31] will retry after 779.227065ms: waiting for machine to come up
	I0115 10:37:31.167934   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:31.168316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:31.168343   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:31.168273   47440 retry.go:31] will retry after 878.328646ms: waiting for machine to come up
	I0115 10:37:32.048590   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:32.048976   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:32.049001   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:32.048920   47440 retry.go:31] will retry after 1.282650862s: waiting for machine to come up
	I0115 10:37:33.333699   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:33.334132   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:33.334161   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:33.334078   47440 retry.go:31] will retry after 1.548948038s: waiting for machine to come up
	I0115 10:37:32.100253   46388 start.go:365] acquiring machines lock for no-preload-824502: {Name:mkb704dd0d53537445b6cba85feeb782a28bf39e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 10:37:34.884455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:34.884845   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:34.884866   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:34.884800   47440 retry.go:31] will retry after 1.555315627s: waiting for machine to come up
	I0115 10:37:36.441833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:36.442329   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:36.442352   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:36.442281   47440 retry.go:31] will retry after 1.803564402s: waiting for machine to come up
	I0115 10:37:38.247833   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:38.248241   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:38.248283   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:38.248213   47440 retry.go:31] will retry after 3.514521425s: waiting for machine to come up
	I0115 10:37:41.766883   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:41.767187   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | unable to find current IP address of domain old-k8s-version-206509 in network mk-old-k8s-version-206509
	I0115 10:37:41.767222   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | I0115 10:37:41.767154   47440 retry.go:31] will retry after 4.349871716s: waiting for machine to come up
	I0115 10:37:47.571869   46584 start.go:369] acquired machines lock for "embed-certs-781270" in 4m40.757219204s
	I0115 10:37:47.571928   46584 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:37:47.571936   46584 fix.go:54] fixHost starting: 
	I0115 10:37:47.572344   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:37:47.572382   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:37:47.591532   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0115 10:37:47.591905   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:37:47.592471   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:37:47.592513   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:37:47.592835   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:37:47.593060   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:37:47.593221   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:37:47.594825   46584 fix.go:102] recreateIfNeeded on embed-certs-781270: state=Stopped err=<nil>
	I0115 10:37:47.594856   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	W0115 10:37:47.595015   46584 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:37:47.597457   46584 out.go:177] * Restarting existing kvm2 VM for "embed-certs-781270" ...
	I0115 10:37:46.118479   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.118936   46387 main.go:141] libmachine: (old-k8s-version-206509) Found IP for machine: 192.168.61.70
	I0115 10:37:46.118960   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserving static IP address...
	I0115 10:37:46.118978   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has current primary IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.119402   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.119425   46387 main.go:141] libmachine: (old-k8s-version-206509) Reserved static IP address: 192.168.61.70
	I0115 10:37:46.119441   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | skip adding static IP to network mk-old-k8s-version-206509 - found existing host DHCP lease matching {name: "old-k8s-version-206509", mac: "52:54:00:b7:7f:eb", ip: "192.168.61.70"}
	I0115 10:37:46.119455   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Getting to WaitForSSH function...
	I0115 10:37:46.119467   46387 main.go:141] libmachine: (old-k8s-version-206509) Waiting for SSH to be available...
	I0115 10:37:46.121874   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122204   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.122236   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.122340   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH client type: external
	I0115 10:37:46.122364   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa (-rw-------)
	I0115 10:37:46.122452   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:37:46.122476   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | About to run SSH command:
	I0115 10:37:46.122492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | exit 0
	I0115 10:37:46.214102   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | SSH cmd err, output: <nil>: 
	I0115 10:37:46.214482   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetConfigRaw
	I0115 10:37:46.215064   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.217294   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217579   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.217618   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.217784   46387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/config.json ...
	I0115 10:37:46.218001   46387 machine.go:88] provisioning docker machine ...
	I0115 10:37:46.218022   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:46.218242   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218440   46387 buildroot.go:166] provisioning hostname "old-k8s-version-206509"
	I0115 10:37:46.218462   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.218593   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.220842   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221188   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.221226   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.221374   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.221525   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221662   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.221760   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.221905   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.222391   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.222411   46387 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-206509 && echo "old-k8s-version-206509" | sudo tee /etc/hostname
	I0115 10:37:46.354906   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-206509
	
	I0115 10:37:46.354939   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.357679   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358051   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.358089   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.358245   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.358470   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358642   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.358799   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.358957   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.359291   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.359318   46387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-206509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-206509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-206509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:37:46.491369   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:37:46.491397   46387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:37:46.491413   46387 buildroot.go:174] setting up certificates
	I0115 10:37:46.491422   46387 provision.go:83] configureAuth start
	I0115 10:37:46.491430   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetMachineName
	I0115 10:37:46.491687   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:46.494369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494750   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.494779   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.494863   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.496985   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497338   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.497368   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.497537   46387 provision.go:138] copyHostCerts
	I0115 10:37:46.497598   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:37:46.497613   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:37:46.497694   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:37:46.497806   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:37:46.497818   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:37:46.497848   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:37:46.497925   46387 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:37:46.497945   46387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:37:46.497982   46387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:37:46.498043   46387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-206509 san=[192.168.61.70 192.168.61.70 localhost 127.0.0.1 minikube old-k8s-version-206509]
	I0115 10:37:46.824648   46387 provision.go:172] copyRemoteCerts
	I0115 10:37:46.824702   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:37:46.824723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.827470   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827785   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.827818   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.827972   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.828174   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.828336   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.828484   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:46.919822   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:37:46.941728   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:37:46.963042   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0115 10:37:46.983757   46387 provision.go:86] duration metric: configureAuth took 492.325875ms
	I0115 10:37:46.983777   46387 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:37:46.983966   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:37:46.984048   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:46.986525   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.986843   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:46.986869   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:46.987107   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:46.987323   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987503   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:46.987651   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:46.987795   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:46.988198   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:46.988219   46387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:37:47.308225   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:37:47.308256   46387 machine.go:91] provisioned docker machine in 1.090242192s
	I0115 10:37:47.308269   46387 start.go:300] post-start starting for "old-k8s-version-206509" (driver="kvm2")
	I0115 10:37:47.308284   46387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:37:47.308310   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.308641   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:37:47.308674   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.311316   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311665   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.311700   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.311835   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.312024   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.312190   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.312315   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.407169   46387 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:37:47.411485   46387 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:37:47.411504   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:37:47.411566   46387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:37:47.411637   46387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:37:47.411715   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:37:47.419976   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:47.446992   46387 start.go:303] post-start completed in 138.700951ms
	I0115 10:37:47.447013   46387 fix.go:56] fixHost completed within 20.348748891s
	I0115 10:37:47.447031   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.449638   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.449996   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.450048   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.450136   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.450309   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.450620   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.450749   46387 main.go:141] libmachine: Using SSH client type: native
	I0115 10:37:47.451070   46387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I0115 10:37:47.451085   46387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:37:47.571711   46387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315067.520557177
	
	I0115 10:37:47.571729   46387 fix.go:206] guest clock: 1705315067.520557177
	I0115 10:37:47.571748   46387 fix.go:219] Guest: 2024-01-15 10:37:47.520557177 +0000 UTC Remote: 2024-01-15 10:37:47.447016864 +0000 UTC m=+297.904172196 (delta=73.540313ms)
	I0115 10:37:47.571772   46387 fix.go:190] guest clock delta is within tolerance: 73.540313ms
	I0115 10:37:47.571782   46387 start.go:83] releasing machines lock for "old-k8s-version-206509", held for 20.473537585s
	I0115 10:37:47.571810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.572157   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:47.574952   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575328   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.575366   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.575490   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.575957   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576146   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:37:47.576232   46387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:37:47.576273   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.576381   46387 ssh_runner.go:195] Run: cat /version.json
	I0115 10:37:47.576406   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:37:47.578863   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579052   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579218   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579248   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579347   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:47.579378   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:47.579385   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579577   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:37:47.579583   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579775   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.579810   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:37:47.579912   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.580094   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:37:47.580316   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:37:47.702555   46387 ssh_runner.go:195] Run: systemctl --version
	I0115 10:37:47.708309   46387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:37:47.862103   46387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:37:47.869243   46387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:37:47.869321   46387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:37:47.886013   46387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:37:47.886033   46387 start.go:475] detecting cgroup driver to use...
	I0115 10:37:47.886093   46387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:37:47.901265   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:37:47.913762   46387 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:37:47.913815   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:37:47.926880   46387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:37:47.942744   46387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:37:48.050667   46387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:37:48.168614   46387 docker.go:233] disabling docker service ...
	I0115 10:37:48.168679   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:37:48.181541   46387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:37:48.193155   46387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:37:48.312374   46387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:37:48.420624   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:37:48.432803   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:37:48.449232   46387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0115 10:37:48.449292   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.458042   46387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:37:48.458109   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.466909   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.475511   46387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:37:48.484081   46387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:37:48.493186   46387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:37:48.502460   46387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:37:48.502507   46387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:37:48.514913   46387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:37:48.522816   46387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:37:48.630774   46387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:37:48.807089   46387 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:37:48.807170   46387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:37:48.812950   46387 start.go:543] Will wait 60s for crictl version
	I0115 10:37:48.813005   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:48.816919   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:37:48.860058   46387 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:37:48.860143   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.916839   46387 ssh_runner.go:195] Run: crio --version
	I0115 10:37:48.968312   46387 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0115 10:37:48.969913   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetIP
	I0115 10:37:48.972776   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973219   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:37:48.973249   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:37:48.973519   46387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0115 10:37:48.977593   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:48.990551   46387 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:37:48.990613   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:49.030917   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:49.030973   46387 ssh_runner.go:195] Run: which lz4
	I0115 10:37:49.035059   46387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0115 10:37:49.039231   46387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:37:49.039262   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0115 10:37:47.598904   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Start
	I0115 10:37:47.599102   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring networks are active...
	I0115 10:37:47.599886   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network default is active
	I0115 10:37:47.600258   46584 main.go:141] libmachine: (embed-certs-781270) Ensuring network mk-embed-certs-781270 is active
	I0115 10:37:47.600652   46584 main.go:141] libmachine: (embed-certs-781270) Getting domain xml...
	I0115 10:37:47.601365   46584 main.go:141] libmachine: (embed-certs-781270) Creating domain...
	I0115 10:37:48.842510   46584 main.go:141] libmachine: (embed-certs-781270) Waiting to get IP...
	I0115 10:37:48.843267   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:48.843637   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:48.843731   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:48.843603   47574 retry.go:31] will retry after 262.69562ms: waiting for machine to come up
	I0115 10:37:49.108361   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.108861   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.108901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.108796   47574 retry.go:31] will retry after 379.820541ms: waiting for machine to come up
	I0115 10:37:49.490343   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.490939   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.490979   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.490898   47574 retry.go:31] will retry after 463.282743ms: waiting for machine to come up
	I0115 10:37:49.956222   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:49.956694   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:49.956725   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:49.956646   47574 retry.go:31] will retry after 539.780461ms: waiting for machine to come up
	I0115 10:37:50.498391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:50.498901   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:50.498935   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:50.498849   47574 retry.go:31] will retry after 611.580301ms: waiting for machine to come up
	I0115 10:37:51.111752   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.112228   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.112263   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.112194   47574 retry.go:31] will retry after 837.335782ms: waiting for machine to come up
	I0115 10:37:50.824399   46387 crio.go:444] Took 1.789376 seconds to copy over tarball
	I0115 10:37:50.824466   46387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:37:53.837707   46387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013210203s)
	I0115 10:37:53.837742   46387 crio.go:451] Took 3.013322 seconds to extract the tarball
	I0115 10:37:53.837753   46387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:37:53.876939   46387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:37:53.922125   46387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0115 10:37:53.922161   46387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:37:53.922213   46387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:53.922249   46387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.922267   46387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.922300   46387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.922520   46387 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.922527   46387 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.922544   46387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.922547   46387 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:53.923794   46387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:53.923809   46387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:53.923811   46387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:53.923807   46387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:53.923785   46387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:53.923843   46387 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0115 10:37:53.923780   46387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.083650   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0115 10:37:54.090328   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.095213   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.123642   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.124012   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:37:54.139399   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.139406   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.207117   46387 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0115 10:37:54.207170   46387 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0115 10:37:54.207168   46387 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0115 10:37:54.207202   46387 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.207230   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.207248   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.248774   46387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.269586   46387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0115 10:37:54.269636   46387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.269661   46387 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0115 10:37:54.269693   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.269693   46387 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.269785   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404758   46387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0115 10:37:54.404862   46387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0115 10:37:54.404907   46387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.404969   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0115 10:37:54.404996   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404873   46387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.405034   46387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0115 10:37:54.405064   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.404975   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0115 10:37:54.405082   46387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.405174   46387 ssh_runner.go:195] Run: which crictl
	I0115 10:37:54.405202   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0115 10:37:54.405149   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0115 10:37:54.502357   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0115 10:37:54.502402   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0115 10:37:54.502507   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0115 10:37:54.502547   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0115 10:37:54.502504   46387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0115 10:37:54.502620   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0115 10:37:54.510689   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0115 10:37:54.577797   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0115 10:37:54.577854   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0115 10:37:54.577885   46387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0115 10:37:54.577945   46387 cache_images.go:92] LoadImages completed in 655.770059ms
	W0115 10:37:54.578019   46387 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0115 10:37:54.578091   46387 ssh_runner.go:195] Run: crio config
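The image-cache phase ends with a warning that none of the cached v1.16.0 control-plane images could be loaded, so they will be pulled during cluster bring-up instead. The two checks the log performs can be rerun on the node to see what the runtime already holds; both commands are taken from the log, with registry.k8s.io/pause:3.1 standing in for any of the eight images listed:

	# everything currently known to CRI-O
	sudo crictl images --output json
	# whether one specific image is present in the shared containers storage
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1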
	I0115 10:37:51.950759   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:51.951289   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:51.951322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:51.951237   47574 retry.go:31] will retry after 817.063291ms: waiting for machine to come up
	I0115 10:37:52.770506   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:52.771015   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:52.771043   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:52.770977   47574 retry.go:31] will retry after 1.000852987s: waiting for machine to come up
	I0115 10:37:53.774011   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:53.774478   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:53.774518   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:53.774452   47574 retry.go:31] will retry after 1.171113667s: waiting for machine to come up
	I0115 10:37:54.947562   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:54.947925   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:54.947951   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:54.947887   47574 retry.go:31] will retry after 1.982035367s: waiting for machine to come up
	I0115 10:37:54.646104   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:37:54.750728   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:37:54.750754   46387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:37:54.750779   46387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-206509 NodeName:old-k8s-version-206509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 10:37:54.750935   46387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-206509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-206509
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:37:54.751014   46387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-206509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:37:54.751063   46387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0115 10:37:54.761568   46387 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:37:54.761645   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:37:54.771892   46387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0115 10:37:54.788678   46387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:37:54.804170   46387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0115 10:37:54.820285   46387 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I0115 10:37:54.823831   46387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:37:54.834806   46387 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509 for IP: 192.168.61.70
	I0115 10:37:54.834838   46387 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:37:54.835023   46387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:37:54.835070   46387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:37:54.835136   46387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.key
	I0115 10:37:54.835190   46387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key.99472042
	I0115 10:37:54.835249   46387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key
	I0115 10:37:54.835356   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:37:54.835392   46387 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:37:54.835401   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:37:54.835439   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:37:54.835467   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:37:54.835491   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:37:54.835531   46387 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:37:54.836204   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:37:54.859160   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:37:54.884674   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:37:54.907573   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:37:54.930846   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:37:54.953329   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:37:54.975335   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:37:54.997505   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:37:55.020494   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:37:55.042745   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:37:55.064085   46387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:37:55.085243   46387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:37:55.101189   46387 ssh_runner.go:195] Run: openssl version
	I0115 10:37:55.106849   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:37:55.118631   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123477   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.123545   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:37:55.129290   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:37:55.141464   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:37:55.153514   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157901   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.157967   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:37:55.163557   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:37:55.173419   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:37:55.184850   46387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189454   46387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.189508   46387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:37:55.194731   46387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:37:55.205634   46387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:37:55.209881   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:37:55.215521   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:37:55.221031   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:37:55.226730   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:37:55.232566   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:37:55.238251   46387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0115 10:37:55.244098   46387 kubeadm.go:404] StartCluster: {Name:old-k8s-version-206509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-206509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:37:55.244188   46387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:37:55.244243   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:37:55.293223   46387 cri.go:89] found id: ""
	I0115 10:37:55.293296   46387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:37:55.305374   46387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:37:55.305403   46387 kubeadm.go:636] restartCluster start
	I0115 10:37:55.305477   46387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:37:55.314925   46387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.316564   46387 kubeconfig.go:92] found "old-k8s-version-206509" server: "https://192.168.61.70:8443"
	I0115 10:37:55.319961   46387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:37:55.329062   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.329148   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.340866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:55.829433   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:55.829549   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:55.843797   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.329336   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.329436   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.343947   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.829507   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:56.829623   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:56.843692   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.329438   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.329522   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.341416   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:57.830063   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:57.830153   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:57.844137   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.329648   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.329743   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.342211   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:58.829792   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:58.829891   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:58.842397   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:59.330122   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.330202   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.346667   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:37:56.931004   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:56.931428   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:56.931461   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:56.931364   47574 retry.go:31] will retry after 2.358737657s: waiting for machine to come up
	I0115 10:37:59.292322   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:37:59.292784   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:37:59.292817   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:37:59.292726   47574 retry.go:31] will retry after 2.808616591s: waiting for machine to come up
	I0115 10:37:59.829162   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:37:59.829242   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:37:59.844148   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.329799   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.329901   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.345118   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:00.829706   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:00.829806   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:00.845105   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.329598   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.329678   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.341872   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:01.829350   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:01.829424   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:01.843987   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.329874   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.329944   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.342152   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.829617   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:02.829711   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:02.841636   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.329206   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.329306   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.341373   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:03.829987   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:03.830080   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:03.842151   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:04.329957   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.330047   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.342133   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:02.103667   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:02.104098   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:02.104127   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:02.104058   47574 retry.go:31] will retry after 2.823867183s: waiting for machine to come up
	I0115 10:38:04.931219   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:04.931550   46584 main.go:141] libmachine: (embed-certs-781270) DBG | unable to find current IP address of domain embed-certs-781270 in network mk-embed-certs-781270
	I0115 10:38:04.931594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | I0115 10:38:04.931523   47574 retry.go:31] will retry after 4.042933854s: waiting for machine to come up
	I0115 10:38:04.829477   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:04.829599   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:04.841546   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.329351   46387 api_server.go:166] Checking apiserver status ...
	I0115 10:38:05.329417   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:05.341866   46387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:05.341892   46387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:05.341900   46387 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:05.341910   46387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:05.342037   46387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:05.376142   46387 cri.go:89] found id: ""
	I0115 10:38:05.376206   46387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:05.391778   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:05.402262   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:05.402331   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411457   46387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:05.411489   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:05.526442   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.239898   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.449098   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.515862   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:06.598545   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:06.598653   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.099595   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:07.599677   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.099492   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.599629   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:08.627737   46387 api_server.go:72] duration metric: took 2.029196375s to wait for apiserver process to appear ...
	I0115 10:38:08.627766   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:08.627803   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
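At this point the restart path has given up on finding a running kube-apiserver process (the repeated pgrep probes above), rerun the kubeadm init phases, and switched to polling the apiserver's health endpoint. Both probes are straightforward to replay on the node; the pgrep pattern and the healthz URL are the ones in the log, and -k is needed because the apiserver serves a cluster-internal certificate:

	# is a kube-apiserver process running for this profile?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# the health endpoint the log is about to poll
	curl -k https://192.168.61.70:8443/healthz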
	I0115 10:38:10.199201   47063 start.go:369] acquired machines lock for "default-k8s-diff-port-709012" in 3m10.23481312s
	I0115 10:38:10.199261   47063 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:10.199269   47063 fix.go:54] fixHost starting: 
	I0115 10:38:10.199630   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:10.199667   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:10.215225   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0115 10:38:10.215627   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:10.216040   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:10.216068   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:10.216372   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:10.216583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:10.216829   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:10.218454   47063 fix.go:102] recreateIfNeeded on default-k8s-diff-port-709012: state=Stopped err=<nil>
	I0115 10:38:10.218482   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	W0115 10:38:10.218676   47063 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:10.220860   47063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-709012" ...
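default-k8s-diff-port-709012 follows the same pattern as the other profiles in this run: the machine is found stopped, so the kvm2 driver restarts the existing libvirt domain rather than creating a new one. A rough manual equivalent, assuming the host's libvirt CLI, would be:

	# confirm the domain exists but is shut off, then boot it again
	virsh domstate default-k8s-diff-port-709012
	virsh start default-k8s-diff-port-709012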
	I0115 10:38:08.976035   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976545   46584 main.go:141] libmachine: (embed-certs-781270) Found IP for machine: 192.168.72.222
	I0115 10:38:08.976574   46584 main.go:141] libmachine: (embed-certs-781270) Reserving static IP address...
	I0115 10:38:08.976592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has current primary IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.976946   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.976980   46584 main.go:141] libmachine: (embed-certs-781270) DBG | skip adding static IP to network mk-embed-certs-781270 - found existing host DHCP lease matching {name: "embed-certs-781270", mac: "52:54:00:58:6d:ca", ip: "192.168.72.222"}
	I0115 10:38:08.976997   46584 main.go:141] libmachine: (embed-certs-781270) Reserved static IP address: 192.168.72.222
	I0115 10:38:08.977017   46584 main.go:141] libmachine: (embed-certs-781270) Waiting for SSH to be available...
	I0115 10:38:08.977033   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Getting to WaitForSSH function...
	I0115 10:38:08.979155   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979456   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:08.979483   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:08.979609   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH client type: external
	I0115 10:38:08.979658   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa (-rw-------)
	I0115 10:38:08.979699   46584 main.go:141] libmachine: (embed-certs-781270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:08.979718   46584 main.go:141] libmachine: (embed-certs-781270) DBG | About to run SSH command:
	I0115 10:38:08.979734   46584 main.go:141] libmachine: (embed-certs-781270) DBG | exit 0
	I0115 10:38:09.082171   46584 main.go:141] libmachine: (embed-certs-781270) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:09.082546   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetConfigRaw
	I0115 10:38:09.083235   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.085481   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.085845   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.085873   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.086115   46584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/config.json ...
	I0115 10:38:09.086309   46584 machine.go:88] provisioning docker machine ...
	I0115 10:38:09.086331   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.086549   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086714   46584 buildroot.go:166] provisioning hostname "embed-certs-781270"
	I0115 10:38:09.086736   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.086884   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.089346   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089702   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.089727   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.089866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.090035   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090180   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.090319   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.090464   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.090845   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.090862   46584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781270 && echo "embed-certs-781270" | sudo tee /etc/hostname
	I0115 10:38:09.240609   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781270
	
	I0115 10:38:09.240643   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.243233   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243586   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.243616   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.243764   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.243976   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.244292   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.244453   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.244774   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.244800   46584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781270/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:09.388902   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:09.388932   46584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:09.388968   46584 buildroot.go:174] setting up certificates
	I0115 10:38:09.388981   46584 provision.go:83] configureAuth start
	I0115 10:38:09.388998   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetMachineName
	I0115 10:38:09.389254   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:09.392236   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392603   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.392643   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.392750   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.395249   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395596   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.395629   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.395797   46584 provision.go:138] copyHostCerts
	I0115 10:38:09.395858   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:09.395872   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:09.395939   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:09.396037   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:09.396045   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:09.396067   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:09.396134   46584 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:09.396141   46584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:09.396159   46584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:09.396212   46584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781270 san=[192.168.72.222 192.168.72.222 localhost 127.0.0.1 minikube embed-certs-781270]
	I0115 10:38:09.457000   46584 provision.go:172] copyRemoteCerts
	I0115 10:38:09.457059   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:09.457081   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.459709   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460074   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.460102   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.460356   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.460522   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.460681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.460798   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:09.556211   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:09.578947   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:09.601191   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:09.623814   46584 provision.go:86] duration metric: configureAuth took 234.815643ms
	I0115 10:38:09.623844   46584 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:09.624070   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:09.624157   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.626592   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.626930   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.626972   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.627141   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.627326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627492   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.627607   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.627755   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:09.628058   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:09.628086   46584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:09.931727   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:09.931765   46584 machine.go:91] provisioned docker machine in 845.442044ms
	I0115 10:38:09.931777   46584 start.go:300] post-start starting for "embed-certs-781270" (driver="kvm2")
	I0115 10:38:09.931790   46584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:09.931810   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:09.932100   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:09.932130   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:09.934487   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934811   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:09.934836   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:09.934999   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:09.935160   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:09.935313   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:09.935480   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.028971   46584 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:10.032848   46584 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:10.032871   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:10.032955   46584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:10.033045   46584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:10.033162   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:10.042133   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:10.064619   46584 start.go:303] post-start completed in 132.827155ms
	I0115 10:38:10.064658   46584 fix.go:56] fixHost completed within 22.492708172s
	I0115 10:38:10.064681   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.067323   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067651   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.067675   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.067812   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.068037   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068272   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.068449   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.068587   46584 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:10.068904   46584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0115 10:38:10.068919   46584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 10:38:10.199025   46584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315090.148648598
	
	I0115 10:38:10.199045   46584 fix.go:206] guest clock: 1705315090.148648598
	I0115 10:38:10.199053   46584 fix.go:219] Guest: 2024-01-15 10:38:10.148648598 +0000 UTC Remote: 2024-01-15 10:38:10.064662616 +0000 UTC m=+303.401739583 (delta=83.985982ms)
	I0115 10:38:10.199088   46584 fix.go:190] guest clock delta is within tolerance: 83.985982ms
	I0115 10:38:10.199096   46584 start.go:83] releasing machines lock for "embed-certs-781270", held for 22.627192785s
	I0115 10:38:10.199122   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.199368   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:10.201962   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202349   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.202389   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.202603   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203326   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:10.203417   46584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:10.203461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.203546   46584 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:10.203570   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:10.206022   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206257   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206371   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206400   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.206673   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:10.206700   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:10.206768   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.206910   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.206911   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:10.207087   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.207191   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:10.207335   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:10.207465   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:10.327677   46584 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:10.333127   46584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:10.473183   46584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:10.480054   46584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:10.480115   46584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:10.494367   46584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:10.494388   46584 start.go:475] detecting cgroup driver to use...
	I0115 10:38:10.494463   46584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:10.508327   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:10.519950   46584 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:10.520003   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:10.531743   46584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:10.544980   46584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:10.650002   46584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:10.767145   46584 docker.go:233] disabling docker service ...
	I0115 10:38:10.767214   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:10.782073   46584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:10.796419   46584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:10.913422   46584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:11.016113   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:11.032638   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:11.053360   46584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:11.053415   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.064008   46584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:11.064067   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.074353   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.084486   46584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:11.093962   46584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:11.105487   46584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:11.117411   46584 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:11.117469   46584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:11.133780   46584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:11.145607   46584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:11.257012   46584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:11.437979   46584 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:11.438050   46584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:11.445814   46584 start.go:543] Will wait 60s for crictl version
	I0115 10:38:11.445896   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:38:11.449770   46584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:11.491895   46584 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:11.491985   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.543656   46584 ssh_runner.go:195] Run: crio --version
	I0115 10:38:11.609733   46584 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:11.611238   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetIP
	I0115 10:38:11.614594   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.614947   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:11.614988   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:11.615225   46584 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:11.619516   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:11.635101   46584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:11.635170   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:11.675417   46584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:11.675504   46584 ssh_runner.go:195] Run: which lz4
	I0115 10:38:11.679733   46584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:38:11.683858   46584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:11.683889   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
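
The two lines above show the preload flow: the tarball is stat'ed on the guest, and only when that check fails is the cached preloaded-images tarball copied over SSH. A minimal local sketch of the same "check, then copy only if missing" decision, assuming hypothetical local paths (the real transfer goes through minikube's ssh_runner, not shown here):

	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"io/fs"
		"os"
	)
	
	// copyIfMissing copies src to dst only when dst does not exist yet,
	// mirroring the "stat, then copy on failure" pattern in the log above.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, nothing to do
		} else if !errors.Is(err, fs.ErrNotExist) {
			return err // a real stat error, not just "missing"
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
	
	func main() {
		// Hypothetical paths standing in for the cached tarball and the guest target.
		if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
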
	I0115 10:38:13.628977   46387 api_server.go:269] stopped: https://192.168.61.70:8443/healthz: Get "https://192.168.61.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0115 10:38:13.629022   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:10.222501   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Start
	I0115 10:38:10.222694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring networks are active...
	I0115 10:38:10.223335   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network default is active
	I0115 10:38:10.225164   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Ensuring network mk-default-k8s-diff-port-709012 is active
	I0115 10:38:10.225189   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Getting domain xml...
	I0115 10:38:10.225201   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Creating domain...
	I0115 10:38:11.529205   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting to get IP...
	I0115 10:38:11.530265   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530808   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.530886   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.530786   47689 retry.go:31] will retry after 220.836003ms: waiting for machine to come up
	I0115 10:38:11.753500   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754152   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:11.754183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:11.754119   47689 retry.go:31] will retry after 288.710195ms: waiting for machine to come up
	I0115 10:38:12.044613   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045149   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.045179   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.045065   47689 retry.go:31] will retry after 321.962888ms: waiting for machine to come up
	I0115 10:38:12.368694   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369119   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.369171   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.369075   47689 retry.go:31] will retry after 457.128837ms: waiting for machine to come up
	I0115 10:38:12.827574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828079   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:12.828108   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:12.828011   47689 retry.go:31] will retry after 524.042929ms: waiting for machine to come up
	I0115 10:38:13.353733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354288   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:13.354315   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:13.354237   47689 retry.go:31] will retry after 885.937378ms: waiting for machine to come up
	I0115 10:38:14.241653   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242258   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:14.242293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:14.242185   47689 retry.go:31] will retry after 1.168061338s: waiting for machine to come up
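
The default-k8s-diff-port-709012 lines above are a wait loop: the driver polls for the VM's DHCP lease and sleeps for a growing interval between attempts. A minimal sketch of that retry-with-backoff shape, with a stubbed condition standing in for the real libvirt lease lookup:

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// waitFor polls check until it succeeds or the deadline passes, growing the
	// delay between attempts, similar to the retry.go lines in the log above.
	func waitFor(check func() error, timeout time.Duration) error {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out, last error: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the backoff between attempts
		}
	}
	
	func main() {
		attempts := 0
		// Stub condition standing in for "machine has an IP address".
		err := waitFor(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("done:", err)
	}
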
	I0115 10:38:14.984346   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:14.984377   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:14.984395   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.129596   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:15.129627   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:15.129650   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.224825   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.224852   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:15.628377   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:15.666573   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0115 10:38:15.666642   46387 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0115 10:38:16.128080   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:38:16.148642   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:38:16.156904   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:38:16.156927   46387 api_server.go:131] duration metric: took 7.529154555s to wait for apiserver health ...
	I0115 10:38:16.156936   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:38:16.156942   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:16.159248   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
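
The 46387 lines above poll the apiserver's /healthz endpoint and treat the 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" until a 200 arrives. A minimal sketch of that readiness probe, assuming a placeholder endpoint and skipping server certificate verification the way a bootstrap-time check has to:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver's serving cert is not trusted by the host here,
				// so verification is skipped for this bootstrap-only probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		// Placeholder address; the log above probes https://192.168.61.70:8443/healthz.
		if err := waitHealthy("https://127.0.0.1:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
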
	I0115 10:38:13.665699   46584 crio.go:444] Took 1.986003 seconds to copy over tarball
	I0115 10:38:13.665769   46584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:16.702911   46584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037102789s)
	I0115 10:38:16.702954   46584 crio.go:451] Took 3.037230 seconds to extract the tarball
	I0115 10:38:16.702966   46584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:16.160810   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:16.173072   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:16.205009   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:16.216599   46387 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:16.216637   46387 system_pods.go:61] "coredns-5644d7b6d9-5qcrz" [3fc31c2b-9c3f-4167-8b3f-bbe262591a90] Running
	I0115 10:38:16.216645   46387 system_pods.go:61] "coredns-5644d7b6d9-rgrbc" [1c2c2a33-f329-4cb3-8e05-900a252ceed3] Running
	I0115 10:38:16.216651   46387 system_pods.go:61] "etcd-old-k8s-version-206509" [8c2919cc-4b82-4387-be0d-f3decf4b324b] Running
	I0115 10:38:16.216658   46387 system_pods.go:61] "kube-apiserver-old-k8s-version-206509" [51e63cf2-5728-471d-b447-3f3aa9454ac7] Running
	I0115 10:38:16.216663   46387 system_pods.go:61] "kube-controller-manager-old-k8s-version-206509" [6dec6bf0-ce5d-4f87-8bf7-c774214eb8ea] Running
	I0115 10:38:16.216668   46387 system_pods.go:61] "kube-proxy-w9fdn" [42b28054-8876-4854-a041-62be5688c1c2] Running
	I0115 10:38:16.216675   46387 system_pods.go:61] "kube-scheduler-old-k8s-version-206509" [7a50352c-2129-4de4-84e8-3cb5d8ccd463] Running
	I0115 10:38:16.216681   46387 system_pods.go:61] "storage-provisioner" [f341413b-8261-4a78-9f28-449be173cf19] Running
	I0115 10:38:16.216690   46387 system_pods.go:74] duration metric: took 11.655731ms to wait for pod list to return data ...
	I0115 10:38:16.216703   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:16.220923   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:16.220962   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:16.220978   46387 node_conditions.go:105] duration metric: took 4.267954ms to run NodePressure ...
	I0115 10:38:16.221005   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:16.519042   46387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:16.523772   46387 retry.go:31] will retry after 264.775555ms: kubelet not initialised
	I0115 10:38:17.172203   46387 retry.go:31] will retry after 553.077445ms: kubelet not initialised
	I0115 10:38:18.053202   46387 retry.go:31] will retry after 653.279352ms: kubelet not initialised
	I0115 10:38:18.837753   46387 retry.go:31] will retry after 692.673954ms: kubelet not initialised
	I0115 10:38:19.596427   46387 retry.go:31] will retry after 679.581071ms: kubelet not initialised
	I0115 10:38:15.412204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412706   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:15.412766   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:15.412670   47689 retry.go:31] will retry after 895.041379ms: waiting for machine to come up
	I0115 10:38:16.309188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309733   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:16.309764   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:16.309692   47689 retry.go:31] will retry after 1.593821509s: waiting for machine to come up
	I0115 10:38:17.904625   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905131   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:17.905168   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:17.905073   47689 retry.go:31] will retry after 2.002505122s: waiting for machine to come up
	I0115 10:38:16.745093   46584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:17.184204   46584 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:17.184235   46584 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:17.184325   46584 ssh_runner.go:195] Run: crio config
	I0115 10:38:17.249723   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:17.249748   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:17.249764   46584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:17.249782   46584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-781270 NodeName:embed-certs-781270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:17.249936   46584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-781270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:17.250027   46584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-781270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:38:17.250091   46584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:17.262237   46584 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:17.262313   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:17.273370   46584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0115 10:38:17.292789   46584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:17.312254   46584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0115 10:38:17.332121   46584 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:17.336199   46584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
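
Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are added with the same shell one-liner: strip any existing line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A minimal sketch of that idempotent update done in-process rather than through bash; the helper name is hypothetical and the result is only printed, not written back:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// upsertHostsEntry returns hosts-file content containing exactly one line
	// that maps name to ip, mirroring the grep -v + echo pipeline in the log.
	func upsertHostsEntry(content, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
			fields := strings.Fields(line)
			if len(fields) > 0 && fields[len(fields)-1] == name {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}
	
	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Print the updated content; the real flow writes it back via sudo cp.
		fmt.Print(upsertHostsEntry(string(data), "192.168.72.222", "control-plane.minikube.internal"))
	}
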
	I0115 10:38:17.349009   46584 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270 for IP: 192.168.72.222
	I0115 10:38:17.349047   46584 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:17.349200   46584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:17.349246   46584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:17.349316   46584 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/client.key
	I0115 10:38:17.685781   46584 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key.4e007618
	I0115 10:38:17.685874   46584 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key
	I0115 10:38:17.685990   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:17.686022   46584 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:17.686033   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:17.686054   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:17.686085   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:17.686107   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:17.686147   46584 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:17.686866   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:17.713652   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:17.744128   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:17.771998   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/embed-certs-781270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:17.796880   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:17.822291   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:17.848429   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:17.874193   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:17.898873   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:17.922742   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:17.945123   46584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:17.967188   46584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:17.983237   46584 ssh_runner.go:195] Run: openssl version
	I0115 10:38:17.988658   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:17.998141   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002462   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.002521   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:18.008136   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:18.017766   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:18.027687   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032418   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.032479   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:18.038349   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:18.048395   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:18.058675   46584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063369   46584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.063441   46584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:18.068886   46584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:18.078459   46584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:18.083181   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:18.089264   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:18.095399   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:18.101292   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:18.107113   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:18.112791   46584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
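
The six openssl invocations above are "does this certificate still have at least 24 hours left" checks (-checkend 86400). The same test can be done in-process; a minimal sketch over one of the PEM paths from the log, used here purely as an example input:

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, the check openssl performs with -checkend.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return false, errors.New("no certificate PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}
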
	I0115 10:38:18.118337   46584 kubeadm.go:404] StartCluster: {Name:embed-certs-781270 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-781270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:18.118561   46584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:18.118611   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:18.162363   46584 cri.go:89] found id: ""
	I0115 10:38:18.162454   46584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:18.172261   46584 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:18.172286   46584 kubeadm.go:636] restartCluster start
	I0115 10:38:18.172357   46584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:18.181043   46584 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.182845   46584 kubeconfig.go:92] found "embed-certs-781270" server: "https://192.168.72.222:8443"
	I0115 10:38:18.186506   46584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:38:18.194997   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.195069   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.205576   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:18.695105   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:18.695200   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:18.709836   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.195362   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.195533   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.210585   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:19.695088   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:19.695201   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:19.710436   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.196063   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.196145   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.211948   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:20.695433   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:20.695545   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:20.710981   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.195510   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.195588   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.206769   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:21.695111   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:21.695192   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:21.706765   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
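The block above repeats the same probe about every 500ms: run pgrep for the kube-apiserver over SSH and try again if it exits non-zero. A minimal sketch of that poll-until-deadline shape, assuming a runPgrep callback standing in for the SSH runner (the name and signature are illustrative, not minikube's actual API):

    package apiserverwait

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForAPIServerPID retries runPgrep at a fixed interval until it succeeds
    // or ctx expires, mirroring the roughly 500ms cadence of the entries above.
    // runPgrep stands in for executing "pgrep -xnf kube-apiserver.*minikube.*"
    // on the guest over SSH.
    func waitForAPIServerPID(ctx context.Context, interval time.Duration, runPgrep func() (string, error)) (string, error) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if pid, err := runPgrep(); err == nil {
                return pid, nil
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver pid never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }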
	I0115 10:38:20.288898   46387 retry.go:31] will retry after 1.97886626s: kubelet not initialised
	I0115 10:38:22.273756   46387 retry.go:31] will retry after 2.35083465s: kubelet not initialised
	I0115 10:38:19.909015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909598   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:19.909629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:19.909539   47689 retry.go:31] will retry after 2.883430325s: waiting for machine to come up
	I0115 10:38:22.794280   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794702   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:22.794729   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:22.794660   47689 retry.go:31] will retry after 3.219865103s: waiting for machine to come up
	I0115 10:38:22.195343   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.195454   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.210740   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:22.695835   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:22.695900   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:22.710247   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.195555   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.195633   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.207117   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:23.695569   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:23.695632   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:23.706867   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.195323   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.195428   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.207679   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.695971   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:24.696049   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:24.708342   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.195900   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.195994   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.207896   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:25.695417   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:25.695490   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:25.706180   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.195799   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.195890   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.206859   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:26.695558   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:26.695648   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:26.706652   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:24.630486   46387 retry.go:31] will retry after 5.638904534s: kubelet not initialised
	I0115 10:38:26.016121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016496   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | unable to find current IP address of domain default-k8s-diff-port-709012 in network mk-default-k8s-diff-port-709012
	I0115 10:38:26.016520   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | I0115 10:38:26.016463   47689 retry.go:31] will retry after 3.426285557s: waiting for machine to come up
	I0115 10:38:29.447165   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447643   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Found IP for machine: 192.168.39.125
	I0115 10:38:29.447678   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has current primary IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.447719   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserving static IP address...
	I0115 10:38:29.448146   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.448172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | skip adding static IP to network mk-default-k8s-diff-port-709012 - found existing host DHCP lease matching {name: "default-k8s-diff-port-709012", mac: "52:54:00:fd:83:1c", ip: "192.168.39.125"}
	I0115 10:38:29.448183   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Reserved static IP address: 192.168.39.125
	I0115 10:38:29.448204   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Waiting for SSH to be available...
	I0115 10:38:29.448215   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Getting to WaitForSSH function...
	I0115 10:38:29.450376   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450690   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.450715   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.450835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH client type: external
	I0115 10:38:29.450867   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa (-rw-------)
	I0115 10:38:29.450899   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:29.450909   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | About to run SSH command:
	I0115 10:38:29.450919   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | exit 0
	I0115 10:38:29.550560   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:29.550940   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetConfigRaw
	I0115 10:38:29.551686   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.554629   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555085   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.555117   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.555426   47063 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/config.json ...
	I0115 10:38:29.555642   47063 machine.go:88] provisioning docker machine ...
	I0115 10:38:29.555672   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:29.555875   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556053   47063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-709012"
	I0115 10:38:29.556076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.556217   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.558493   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.558804   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.558835   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.559018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.559209   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559363   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.559516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.559677   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.560009   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.560028   47063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-709012 && echo "default-k8s-diff-port-709012" | sudo tee /etc/hostname
	I0115 10:38:29.706028   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-709012
	
	I0115 10:38:29.706059   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.708893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.709343   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.709409   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.709631   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709789   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.709938   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.710121   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:29.710473   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:29.710501   47063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-709012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-709012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-709012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:29.845884   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:29.845916   47063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:29.845938   47063 buildroot.go:174] setting up certificates
	I0115 10:38:29.845953   47063 provision.go:83] configureAuth start
	I0115 10:38:29.845973   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetMachineName
	I0115 10:38:29.846293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:29.849072   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849516   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.849558   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.849755   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.852196   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852548   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.852574   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.852664   47063 provision.go:138] copyHostCerts
	I0115 10:38:29.852716   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:29.852726   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:29.852778   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:29.852870   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:29.852877   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:29.852896   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:29.852957   47063 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:29.852964   47063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:29.852981   47063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:29.853031   47063 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-709012 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube default-k8s-diff-port-709012]
	I0115 10:38:30.777181   46388 start.go:369] acquired machines lock for "no-preload-824502" in 58.676870352s
	I0115 10:38:30.777252   46388 start.go:96] Skipping create...Using existing machine configuration
	I0115 10:38:30.777263   46388 fix.go:54] fixHost starting: 
	I0115 10:38:30.777697   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:30.777733   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:30.795556   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0115 10:38:30.795931   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:30.796387   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:38:30.796417   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:30.796825   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:30.797001   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:30.797164   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:38:30.798953   46388 fix.go:102] recreateIfNeeded on no-preload-824502: state=Stopped err=<nil>
	I0115 10:38:30.798978   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	W0115 10:38:30.799146   46388 fix.go:128] unexpected machine state, will restart: <nil>
	I0115 10:38:30.800981   46388 out.go:177] * Restarting existing kvm2 VM for "no-preload-824502" ...
	I0115 10:38:27.195033   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.195128   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.205968   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:27.695992   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:27.696075   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:27.707112   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.195726   46584 api_server.go:166] Checking apiserver status ...
	I0115 10:38:28.195798   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:28.206794   46584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:28.206837   46584 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:28.206846   46584 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:28.206858   46584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:28.206917   46584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:28.256399   46584 cri.go:89] found id: ""
	I0115 10:38:28.256468   46584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:28.272234   46584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:28.281359   46584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:28.281439   46584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290385   46584 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:28.290431   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:28.417681   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.012673   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.212322   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.296161   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:29.378870   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:29.378965   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.879587   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.379077   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:30.879281   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:31.379626   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:29.951966   47063 provision.go:172] copyRemoteCerts
	I0115 10:38:29.952019   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:29.952040   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:29.954784   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955082   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:29.955104   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:29.955285   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:29.955466   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:29.955649   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:29.955793   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.057077   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:30.081541   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 10:38:30.109962   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:38:30.140809   47063 provision.go:86] duration metric: configureAuth took 294.836045ms
	I0115 10:38:30.140840   47063 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:30.141071   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:30.141167   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.144633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.144975   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.145015   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.145177   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.145378   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145539   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.145703   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.145927   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.146287   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.146310   47063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:30.484993   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:30.485022   47063 machine.go:91] provisioned docker machine in 929.358403ms
	I0115 10:38:30.485035   47063 start.go:300] post-start starting for "default-k8s-diff-port-709012" (driver="kvm2")
	I0115 10:38:30.485049   47063 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:30.485067   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.485390   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:30.485431   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.488115   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488473   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.488512   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.488633   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.488837   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.489018   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.489171   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.590174   47063 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:30.594879   47063 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:30.594907   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:30.594974   47063 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:30.595069   47063 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:30.595183   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:30.604525   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:30.631240   47063 start.go:303] post-start completed in 146.190685ms
	I0115 10:38:30.631270   47063 fix.go:56] fixHost completed within 20.431996373s
	I0115 10:38:30.631293   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.634188   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634544   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.634577   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.634807   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.635014   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635185   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.635367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.635574   47063 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:30.636012   47063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0115 10:38:30.636032   47063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 10:38:30.777043   47063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315110.724251584
	
	I0115 10:38:30.777069   47063 fix.go:206] guest clock: 1705315110.724251584
	I0115 10:38:30.777079   47063 fix.go:219] Guest: 2024-01-15 10:38:30.724251584 +0000 UTC Remote: 2024-01-15 10:38:30.631274763 +0000 UTC m=+210.817197544 (delta=92.976821ms)
	I0115 10:38:30.777107   47063 fix.go:190] guest clock delta is within tolerance: 92.976821ms
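The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the existing time when the delta (92.976821ms here) is inside a tolerance. A minimal sketch of that comparison; the package name, function name, and whatever tolerance a caller passes are assumptions for illustration, not minikube's exact values:

    package clockcheck

    import (
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaWithinTolerance parses the guest's "date +%s.%N" output and
    // reports the difference from the host clock plus whether it falls inside
    // tol. The tolerance a caller passes (e.g. 2*time.Second) is an assumption
    // for illustration, not necessarily the value minikube uses.
    func clockDeltaWithinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := host.Sub(guest)
        return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }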
	I0115 10:38:30.777114   47063 start.go:83] releasing machines lock for "default-k8s-diff-port-709012", held for 20.577876265s
	I0115 10:38:30.777143   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.777406   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:30.780611   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781041   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.781076   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.781250   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.781876   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:30.782186   47063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:30.782240   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.782295   47063 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:30.782321   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:30.785597   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786228   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.786255   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786386   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.786698   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.786881   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.787023   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.787078   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:30.787095   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:30.787204   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.787774   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:30.787930   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:30.788121   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:30.788345   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:30.919659   47063 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:30.926237   47063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:31.076313   47063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:31.085010   47063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:31.085087   47063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:31.104237   47063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:31.104265   47063 start.go:475] detecting cgroup driver to use...
	I0115 10:38:31.104331   47063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:31.124044   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:31.139494   47063 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:31.139581   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:31.154894   47063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:31.172458   47063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:31.307400   47063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:31.496675   47063 docker.go:233] disabling docker service ...
	I0115 10:38:31.496733   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:31.513632   47063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:31.526228   47063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:31.681556   47063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:31.816489   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:31.831193   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:31.853530   47063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:31.853602   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.864559   47063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:31.864661   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.875384   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.888460   47063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:31.904536   47063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:31.915622   47063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:31.929209   47063 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:31.929266   47063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:31.948691   47063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:31.959872   47063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:32.102988   47063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:32.300557   47063 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:32.300632   47063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:32.305636   47063 start.go:543] Will wait 60s for crictl version
	I0115 10:38:32.305691   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:38:32.309883   47063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:32.354459   47063 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:32.354594   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.402443   47063 ssh_runner.go:195] Run: crio --version
	I0115 10:38:32.463150   47063 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0115 10:38:30.802324   46388 main.go:141] libmachine: (no-preload-824502) Calling .Start
	I0115 10:38:30.802525   46388 main.go:141] libmachine: (no-preload-824502) Ensuring networks are active...
	I0115 10:38:30.803127   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network default is active
	I0115 10:38:30.803476   46388 main.go:141] libmachine: (no-preload-824502) Ensuring network mk-no-preload-824502 is active
	I0115 10:38:30.803799   46388 main.go:141] libmachine: (no-preload-824502) Getting domain xml...
	I0115 10:38:30.804452   46388 main.go:141] libmachine: (no-preload-824502) Creating domain...
	I0115 10:38:32.173614   46388 main.go:141] libmachine: (no-preload-824502) Waiting to get IP...
	I0115 10:38:32.174650   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.175113   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.175211   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.175106   47808 retry.go:31] will retry after 275.127374ms: waiting for machine to come up
	I0115 10:38:32.451595   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.452150   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.452183   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.452095   47808 retry.go:31] will retry after 258.80121ms: waiting for machine to come up
	I0115 10:38:32.712701   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:32.713348   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:32.713531   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:32.713459   47808 retry.go:31] will retry after 440.227123ms: waiting for machine to come up
	I0115 10:38:33.155845   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.156595   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.156625   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.156500   47808 retry.go:31] will retry after 428.795384ms: waiting for machine to come up
	I0115 10:38:33.587781   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:33.588169   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:33.588190   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:33.588118   47808 retry.go:31] will retry after 720.536787ms: waiting for machine to come up
	I0115 10:38:34.310098   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:34.310640   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:34.310674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:34.310604   47808 retry.go:31] will retry after 841.490959ms: waiting for machine to come up
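The libmachine lines above poll the libvirt DHCP leases for the domain's IP and retry after a delay that grows from a few hundred milliseconds per attempt. A minimal sketch of that retry shape; the lookup callback, the delay schedule, and the package name are stand-ins, not minikube's retry package:

    package machinewait

    import (
        "context"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with a delay that grows each attempt plus a little
    // jitter, roughly the shape of the "will retry after ..." lines above.
    // lookup stands in for querying the libvirt DHCP leases for the domain's IP.
    func waitForIP(ctx context.Context, lookup func() (string, error)) (string, error) {
        delay := 250 * time.Millisecond
        for {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("machine never came up: %w", ctx.Err())
            case <-time.After(delay + jitter):
            }
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }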
	I0115 10:38:30.274782   46387 retry.go:31] will retry after 7.853808987s: kubelet not initialised
	I0115 10:38:32.464592   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetIP
	I0115 10:38:32.467583   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.467962   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:32.467993   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:32.468218   47063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:32.472463   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:32.488399   47063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:38:32.488488   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:32.535645   47063 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 10:38:32.535776   47063 ssh_runner.go:195] Run: which lz4
	I0115 10:38:32.541468   47063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 10:38:32.547264   47063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 10:38:32.547297   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0115 10:38:34.427435   47063 crio.go:444] Took 1.886019 seconds to copy over tarball
	I0115 10:38:34.427510   47063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 10:38:31.879639   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.379656   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:32.408694   46584 api_server.go:72] duration metric: took 3.029823539s to wait for apiserver process to appear ...
	I0115 10:38:32.408737   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:32.408760   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.614020   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:36.614053   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:36.614068   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.687561   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.687606   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.134400   46387 retry.go:31] will retry after 7.988567077s: kubelet not initialised
	I0115 10:38:35.154196   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:35.154644   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:35.154674   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:35.154615   47808 retry.go:31] will retry after 1.099346274s: waiting for machine to come up
	I0115 10:38:36.255575   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:36.256111   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:36.256151   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:36.256038   47808 retry.go:31] will retry after 1.294045748s: waiting for machine to come up
	I0115 10:38:37.551734   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:37.552569   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:37.552593   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:37.552527   47808 retry.go:31] will retry after 1.720800907s: waiting for machine to come up
	I0115 10:38:39.275250   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:39.275651   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:39.275684   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:39.275595   47808 retry.go:31] will retry after 1.914509744s: waiting for machine to come up
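The interleaved 46388 lines are the no-preload-824502 VM still booting: libmachine keeps re-reading the libvirt DHCP leases for the domain's MAC address and, while no lease exists, schedules another attempt with a growing delay (retry.go's "will retry after ..."). A bare-bones sketch of that wait loop with the condition left abstract; the backoff numbers here are illustrative, not the ones retry.go computes:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes, roughly
// mirroring the "waiting for machine to come up" retries in the log.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond // illustrative starting delay
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	err := waitFor(func() error {
		// e.g. parse the libvirt DHCP leases for the VM's MAC address here
		return errors.New("no DHCP lease yet")
	}, 5*time.Second)
	fmt.Println(err)
}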
	I0115 10:38:37.765711   47063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.338169875s)
	I0115 10:38:37.765741   47063 crio.go:451] Took 3.338279 seconds to extract the tarball
	I0115 10:38:37.765753   47063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 10:38:37.807016   47063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:37.858151   47063 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:38:37.858195   47063 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:38:37.858295   47063 ssh_runner.go:195] Run: crio config
	I0115 10:38:37.933830   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:37.933851   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:37.933872   47063 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:38:37.933896   47063 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-709012 NodeName:default-k8s-diff-port-709012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:38:37.934040   47063 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-709012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:38:37.934132   47063 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-709012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0115 10:38:37.934202   47063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:38:37.945646   47063 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:38:37.945728   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:38:37.957049   47063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0115 10:38:37.978770   47063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:38:37.995277   47063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
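The 2115-byte kubeadm.yaml.new written here is the multi-document config rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---. A short sketch that enumerates those documents using the gopkg.in/yaml.v3 package; the local file path is an assumption for illustration (on the node the file lands at /var/tmp/minikube/kubeadm.yaml.new):

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the rendered kubeadm config.
	data, err := os.ReadFile("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}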
	I0115 10:38:38.012964   47063 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0115 10:38:38.016803   47063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:38.028708   47063 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012 for IP: 192.168.39.125
	I0115 10:38:38.028740   47063 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:38.028887   47063 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:38:38.028926   47063 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:38:38.028988   47063 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.key
	I0115 10:38:38.048801   47063 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key.657bd91f
	I0115 10:38:38.048895   47063 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key
	I0115 10:38:38.049019   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:38:38.049058   47063 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:38:38.049075   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:38:38.049110   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:38:38.049149   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:38:38.049183   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:38:38.049241   47063 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:38.049848   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:38:38.078730   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:38:38.102069   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:38:38.124278   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:38:38.150354   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:38:38.173703   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:38:38.201758   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:38:38.227016   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:38:38.249876   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:38:38.271859   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:38:38.294051   47063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:38:38.316673   47063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:38:38.335128   47063 ssh_runner.go:195] Run: openssl version
	I0115 10:38:38.342574   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:38:38.355889   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361805   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.361871   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:38:38.369192   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:38:38.381493   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:38:38.391714   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396728   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.396787   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:38:38.402624   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:38:38.413957   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:38:38.425258   47063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430627   47063 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.430697   47063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:38:38.440362   47063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:38:38.453323   47063 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:38:38.458803   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:38:38.465301   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:38:38.471897   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:38:38.478274   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:38:38.484890   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:38:38.490909   47063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
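Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is how minikube decides whether the existing control-plane certs can be reused. The same check expressed in Go, reading a PEM-encoded certificate; the file path is just one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in less
// than d from now - the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}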
	I0115 10:38:38.496868   47063 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-709012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-709012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:38:38.496966   47063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:38:38.497015   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:38.539389   47063 cri.go:89] found id: ""
	I0115 10:38:38.539475   47063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:38:38.550998   47063 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:38:38.551020   47063 kubeadm.go:636] restartCluster start
	I0115 10:38:38.551076   47063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:38:38.561885   47063 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:38.563439   47063 kubeconfig.go:92] found "default-k8s-diff-port-709012" server: "https://192.168.39.125:8444"
	I0115 10:38:38.566482   47063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
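restartCluster starts by diffing the kubeadm.yaml already deployed on the node against the freshly rendered kubeadm.yaml.new, to see whether the desired configuration changed since the last start. A trivial Go equivalent of that comparison, since a byte-for-byte check is all the `diff -u` above establishes; the file names are the ones from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// sameConfig reports whether the deployed and the newly generated kubeadm
// configs are identical.
func sameConfig(oldPath, newPath string) (bool, error) {
	oldData, err := os.ReadFile(oldPath)
	if err != nil {
		return false, err
	}
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	return bytes.Equal(oldData, newData), nil
}

func main() {
	same, err := sameConfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(same, err)
}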
	I0115 10:38:38.576458   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:38.576521   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:38.588702   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.077323   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.077407   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.089885   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:39.577363   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:39.577441   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:39.591111   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:36.909069   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:36.917556   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:36.917594   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.409134   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.417305   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.417348   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:37.909251   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:37.916788   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:37.916824   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.409535   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:38.416538   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:38.416572   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:38.908929   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.863238   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.863279   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.863294   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:39.869897   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:39.869922   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:39.909113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.065422   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:40.065467   46584 api_server.go:103] status: https://192.168.72.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:40.408921   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:38:40.414320   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:38:40.424348   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:40.424378   46584 api_server.go:131] duration metric: took 8.015632919s to wait for apiserver health ...
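The long healthz transcript above is the normal recovery sequence after a restart: first a 403 while the RBAC bootstrap roles that permit anonymous /healthz reads are not yet in place, then 500 with individual poststarthook checks failing as etcd and the bootstrap controllers catch up, and finally 200 once every check passes (about 8 seconds in this run). A minimal poller in the same spirit as api_server.go's wait loop; the endpoint is the one from the log, and TLS verification is skipped only because this is a throwaway diagnostic probe:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in our trust store here, so skip
		// verification for this diagnostic probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.222:8443/healthz?verbose")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane is healthy
			}
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}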
	I0115 10:38:40.424390   46584 cni.go:84] Creating CNI manager for ""
	I0115 10:38:40.424398   46584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:40.426615   46584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:40.427979   46584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:40.450675   46584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:40.478174   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:40.492540   46584 system_pods.go:59] 9 kube-system pods found
	I0115 10:38:40.492582   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492593   46584 system_pods.go:61] "coredns-5dd5756b68-w4p2z" [87d362df-5c29-4a04-b44f-c502cf6849bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:40.492609   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:40.492619   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:40.492633   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:40.492646   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:40.492658   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:40.492671   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:40.492687   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:40.492700   46584 system_pods.go:74] duration metric: took 14.502202ms to wait for pod list to return data ...
	I0115 10:38:40.492715   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:40.496471   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:40.496504   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:40.496517   46584 node_conditions.go:105] duration metric: took 3.794528ms to run NodePressure ...
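node_conditions.go is reading the node's reported capacity (17784752Ki of ephemeral storage and 2 CPUs here) to confirm the VM is not under resource pressure before the addon phase is re-run. The same information can be read with client-go; a sketch only, with a hypothetical kubeconfig path and error handling reduced to panics:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust to your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// A node that is not Ready explains the "(skipping!)" waits below.
				fmt.Printf("  Ready=%s (%s)\n", c.Status, c.Reason)
			}
		}
	}
}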
	I0115 10:38:40.496538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:40.770732   46584 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777051   46584 kubeadm.go:787] kubelet initialised
	I0115 10:38:40.777118   46584 kubeadm.go:788] duration metric: took 6.307286ms waiting for restarted kubelet to initialise ...
	I0115 10:38:40.777139   46584 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:40.784605   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.798293   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798365   46584 pod_ready.go:81] duration metric: took 13.654765ms waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.798389   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.798402   46584 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.807236   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807276   46584 pod_ready.go:81] duration metric: took 8.862426ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.807289   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.807297   46584 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.813904   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813932   46584 pod_ready.go:81] duration metric: took 6.62492ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.813944   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "etcd-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.813951   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:40.882407   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882458   46584 pod_ready.go:81] duration metric: took 68.496269ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:40.882472   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:40.882485   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.282123   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282158   46584 pod_ready.go:81] duration metric: took 399.656962ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.282172   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.282181   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:41.683979   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684007   46584 pod_ready.go:81] duration metric: took 401.816493ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:41.684017   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-proxy-jqgfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:41.684023   46584 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.082465   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082490   46584 pod_ready.go:81] duration metric: took 398.460424ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.082501   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.082509   46584 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:42.484454   46584 pod_ready.go:97] node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484490   46584 pod_ready.go:81] duration metric: took 401.970108ms waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:42.484504   46584 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-781270" hosting pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:42.484513   46584 pod_ready.go:38] duration metric: took 1.707353329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
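The block above is one readiness sweep: every system-critical pod is polled for the Ready condition, and each wait is cut short (with the E-level WaitExtra lines) while the node itself still reports Ready=False. A rough hand-run equivalent of that sweep, sketched against the same context and namespace named in the log (timeout chosen arbitrarily here):

    # Poll each kube-system pod for the Ready condition, as the harness does above.
    for p in $(kubectl --context embed-certs-781270 -n kube-system get pods -o name); do
      kubectl --context embed-certs-781270 -n kube-system wait --for=condition=Ready "$p" --timeout=240s
    done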
	I0115 10:38:42.484534   46584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:42.499693   46584 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:42.499715   46584 kubeadm.go:640] restartCluster took 24.327423485s
	I0115 10:38:42.499733   46584 kubeadm.go:406] StartCluster complete in 24.381392225s
	I0115 10:38:42.499752   46584 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.499817   46584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:42.502965   46584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:42.503219   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:42.503253   46584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:42.503356   46584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-781270"
	I0115 10:38:42.503374   46584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-781270"
	I0115 10:38:42.503383   46584 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-781270"
	I0115 10:38:42.503395   46584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-781270"
	W0115 10:38:42.503402   46584 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:42.503451   46584 config.go:182] Loaded profile config "embed-certs-781270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:42.503493   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503504   46584 addons.go:69] Setting metrics-server=true in profile "embed-certs-781270"
	I0115 10:38:42.503520   46584 addons.go:234] Setting addon metrics-server=true in "embed-certs-781270"
	W0115 10:38:42.503533   46584 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:42.503577   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.503826   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503850   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503855   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503871   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.503895   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.503924   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.522809   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0115 10:38:42.523025   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0115 10:38:42.523163   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0115 10:38:42.523260   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523382   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523755   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.523861   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.523990   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524323   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524345   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524415   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524585   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.524605   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.524825   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.524992   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525017   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525375   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.525412   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.525571   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.525747   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.528762   46584 addons.go:234] Setting addon default-storageclass=true in "embed-certs-781270"
	W0115 10:38:42.528781   46584 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:42.528807   46584 host.go:66] Checking if "embed-certs-781270" exists ...
	I0115 10:38:42.529117   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.529140   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.544693   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0115 10:38:42.545013   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0115 10:38:42.545528   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.545628   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.546235   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546265   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546268   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.546280   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.546650   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546687   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.546820   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.546918   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0115 10:38:42.547068   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.547459   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.548255   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.548269   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.548859   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.549002   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.549393   46584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:42.549415   46584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:42.549597   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.551555   46584 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:42.552918   46584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:42.554551   46584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.554573   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:42.554591   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.554552   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:42.554648   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:42.554662   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.561284   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.561706   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.561854   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.562023   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.562123   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.562179   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.562229   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.564058   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564432   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.564529   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.564798   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.564977   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.565148   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.565282   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.570688   46584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0115 10:38:42.571242   46584 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:42.571724   46584 main.go:141] libmachine: Using API Version  1
	I0115 10:38:42.571749   46584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:42.571989   46584 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:42.572135   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetState
	I0115 10:38:42.573685   46584 main.go:141] libmachine: (embed-certs-781270) Calling .DriverName
	I0115 10:38:42.573936   46584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.573952   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:42.573969   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHHostname
	I0115 10:38:42.576948   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577272   46584 main.go:141] libmachine: (embed-certs-781270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:6d:ca", ip: ""} in network mk-embed-certs-781270: {Iface:virbr2 ExpiryTime:2024-01-15 11:28:58 +0000 UTC Type:0 Mac:52:54:00:58:6d:ca Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:embed-certs-781270 Clientid:01:52:54:00:58:6d:ca}
	I0115 10:38:42.577312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | domain embed-certs-781270 has defined IP address 192.168.72.222 and MAC address 52:54:00:58:6d:ca in network mk-embed-certs-781270
	I0115 10:38:42.577680   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHPort
	I0115 10:38:42.577866   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHKeyPath
	I0115 10:38:42.577988   46584 main.go:141] libmachine: (embed-certs-781270) Calling .GetSSHUsername
	I0115 10:38:42.578134   46584 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/embed-certs-781270/id_rsa Username:docker}
	I0115 10:38:42.687267   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:42.687293   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:42.707058   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:42.707083   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:42.727026   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:42.745278   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:42.777425   46584 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:42.777450   46584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:42.779528   46584 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:42.832109   46584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:43.011971   46584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-781270" context rescaled to 1 replicas
	I0115 10:38:43.012022   46584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:43.014704   46584 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:43.016005   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:44.039814   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.294486297s)
	I0115 10:38:44.039891   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312831152s)
	I0115 10:38:44.039895   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039928   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.039946   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040024   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040264   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040283   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040293   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040302   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040391   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040412   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040427   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040451   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.040461   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.040613   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040744   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040750   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.040755   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.040791   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.040800   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054113   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.054134   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.054409   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.054454   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.054469   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.151470   46584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.135429651s)
	I0115 10:38:44.151517   46584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:44.151560   46584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319411531s)
	I0115 10:38:44.151601   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.151626   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.151954   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.151973   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152001   46584 main.go:141] libmachine: Making call to close driver server
	I0115 10:38:44.152012   46584 main.go:141] libmachine: (embed-certs-781270) Calling .Close
	I0115 10:38:44.152312   46584 main.go:141] libmachine: (embed-certs-781270) DBG | Closing plugin on server side
	I0115 10:38:44.152317   46584 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:38:44.152328   46584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:38:44.152338   46584 addons.go:470] Verifying addon metrics-server=true in "embed-certs-781270"
	I0115 10:38:44.155687   46584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
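The addon manifests are scp'd into the VM and applied there with the bundled kubectl, exactly as the Run: lines above record. Reproduced by hand over the profile's SSH session it would look roughly like this (illustrative sketch only; paths and file names are the ones printed in the log):

    minikube -p embed-certs-781270 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml"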
	I0115 10:38:41.191855   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:41.192271   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:41.192310   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:41.192239   47808 retry.go:31] will retry after 2.364591434s: waiting for machine to come up
	I0115 10:38:43.560150   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:43.560624   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:43.560648   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:43.560581   47808 retry.go:31] will retry after 3.204170036s: waiting for machine to come up
	I0115 10:38:40.076788   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.076875   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.089217   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:40.577351   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:40.577448   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:40.593294   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.076625   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.076730   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.092700   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:41.577413   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:41.577513   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:41.592266   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.076755   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.076862   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.090411   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:42.576920   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:42.576982   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:42.589590   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.077312   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.077410   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.089732   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:43.576781   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:43.576857   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:43.592414   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.076854   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.076922   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.089009   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:44.576614   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:44.576713   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:44.592137   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
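The repeated "Checking apiserver status" / "stopped" pairs above are a single retry loop: the harness looks for a kube-apiserver process inside the node and treats a non-zero exit from pgrep as "not up yet". Run by hand inside the VM, the probe is just the same expression the log shows (quoted here for the shell):

    # Exit status 1 while no matching apiserver process exists, 0 once it is running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo $?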
	I0115 10:38:44.157450   46584 addons.go:505] enable addons completed in 1.654202196s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:38:46.156830   46584 node_ready.go:58] node "embed-certs-781270" has status "Ready":"False"
	I0115 10:38:46.129496   46387 retry.go:31] will retry after 7.881779007s: kubelet not initialised
	I0115 10:38:46.766674   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:46.767050   46388 main.go:141] libmachine: (no-preload-824502) DBG | unable to find current IP address of domain no-preload-824502 in network mk-no-preload-824502
	I0115 10:38:46.767072   46388 main.go:141] libmachine: (no-preload-824502) DBG | I0115 10:38:46.767013   47808 retry.go:31] will retry after 3.09324278s: waiting for machine to come up
	I0115 10:38:45.076819   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.076882   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.092624   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:45.576654   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:45.576724   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:45.590306   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.076821   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.076920   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.090883   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:46.577506   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:46.577590   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:46.590379   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.076909   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.076997   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.088742   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:47.577287   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:47.577371   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:47.589014   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.076538   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.076608   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.088956   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.576474   47063 api_server.go:166] Checking apiserver status ...
	I0115 10:38:48.576573   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:38:48.588122   47063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:38:48.588146   47063 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:38:48.588153   47063 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:38:48.588162   47063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:38:48.588214   47063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:38:48.631367   47063 cri.go:89] found id: ""
	I0115 10:38:48.631441   47063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:38:48.648653   47063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:38:48.657948   47063 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:38:48.658017   47063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668103   47063 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:38:48.668124   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:48.787890   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.559039   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.767002   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:49.842165   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
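After the stale-config check fails, the cluster is reconfigured by running a fixed sequence of kubeadm init phases inside the VM with the bundled binary and the regenerated /var/tmp/minikube/kubeadm.yaml. Listed here only as a recap of the Run: lines above:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml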
	I0115 10:38:47.155176   46584 node_ready.go:49] node "embed-certs-781270" has status "Ready":"True"
	I0115 10:38:47.155200   46584 node_ready.go:38] duration metric: took 3.003671558s waiting for node "embed-certs-781270" to be "Ready" ...
	I0115 10:38:47.155212   46584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:47.162248   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:49.169885   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:51.190513   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:49.864075   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864515   46388 main.go:141] libmachine: (no-preload-824502) Found IP for machine: 192.168.50.136
	I0115 10:38:49.864538   46388 main.go:141] libmachine: (no-preload-824502) Reserving static IP address...
	I0115 10:38:49.864554   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has current primary IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.864990   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.865034   46388 main.go:141] libmachine: (no-preload-824502) DBG | skip adding static IP to network mk-no-preload-824502 - found existing host DHCP lease matching {name: "no-preload-824502", mac: "52:54:00:e7:ab:81", ip: "192.168.50.136"}
	I0115 10:38:49.865052   46388 main.go:141] libmachine: (no-preload-824502) Reserved static IP address: 192.168.50.136
	I0115 10:38:49.865073   46388 main.go:141] libmachine: (no-preload-824502) Waiting for SSH to be available...
	I0115 10:38:49.865115   46388 main.go:141] libmachine: (no-preload-824502) DBG | Getting to WaitForSSH function...
	I0115 10:38:49.867410   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867671   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.867708   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.867864   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH client type: external
	I0115 10:38:49.867924   46388 main.go:141] libmachine: (no-preload-824502) DBG | Using SSH private key: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa (-rw-------)
	I0115 10:38:49.867961   46388 main.go:141] libmachine: (no-preload-824502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 10:38:49.867983   46388 main.go:141] libmachine: (no-preload-824502) DBG | About to run SSH command:
	I0115 10:38:49.867994   46388 main.go:141] libmachine: (no-preload-824502) DBG | exit 0
	I0115 10:38:49.966638   46388 main.go:141] libmachine: (no-preload-824502) DBG | SSH cmd err, output: <nil>: 
	I0115 10:38:49.967072   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetConfigRaw
	I0115 10:38:49.967925   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:49.970409   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.970811   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.970846   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.971099   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/config.json ...
	I0115 10:38:49.971300   46388 machine.go:88] provisioning docker machine ...
	I0115 10:38:49.971327   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:49.971561   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971757   46388 buildroot.go:166] provisioning hostname "no-preload-824502"
	I0115 10:38:49.971783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:49.971970   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:49.974279   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974723   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:49.974758   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:49.974917   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:49.975088   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975247   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:49.975460   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:49.975640   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:49.976081   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:49.976099   46388 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-824502 && echo "no-preload-824502" | sudo tee /etc/hostname
	I0115 10:38:50.121181   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-824502
	
	I0115 10:38:50.121206   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.123821   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124162   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.124194   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.124371   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.124588   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124788   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.124940   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.125103   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.125410   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.125429   46388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-824502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-824502/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-824502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:38:50.259649   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 10:38:50.259680   46388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17953-4821/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-4821/.minikube}
	I0115 10:38:50.259710   46388 buildroot.go:174] setting up certificates
	I0115 10:38:50.259724   46388 provision.go:83] configureAuth start
	I0115 10:38:50.259736   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetMachineName
	I0115 10:38:50.260022   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:50.262296   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262683   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.262704   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.262848   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.265340   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265715   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.265743   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.265885   46388 provision.go:138] copyHostCerts
	I0115 10:38:50.265942   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem, removing ...
	I0115 10:38:50.265953   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem
	I0115 10:38:50.266025   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/ca.pem (1078 bytes)
	I0115 10:38:50.266128   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem, removing ...
	I0115 10:38:50.266143   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem
	I0115 10:38:50.266178   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/cert.pem (1123 bytes)
	I0115 10:38:50.266258   46388 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem, removing ...
	I0115 10:38:50.266268   46388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem
	I0115 10:38:50.266296   46388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-4821/.minikube/key.pem (1675 bytes)
	I0115 10:38:50.266359   46388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem org=jenkins.no-preload-824502 san=[192.168.50.136 192.168.50.136 localhost 127.0.0.1 minikube no-preload-824502]
	I0115 10:38:50.666513   46388 provision.go:172] copyRemoteCerts
	I0115 10:38:50.666584   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:38:50.666615   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.669658   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670109   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.670162   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.670410   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.670632   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.670812   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.671067   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:50.774944   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 10:38:50.799533   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0115 10:38:50.824210   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 10:38:50.849191   46388 provision.go:86] duration metric: configureAuth took 589.452836ms
	I0115 10:38:50.849224   46388 buildroot.go:189] setting minikube options for container-runtime
	I0115 10:38:50.849455   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:38:50.849560   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:50.852884   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853291   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:50.853346   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:50.853508   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:50.853746   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.853936   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:50.854105   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:50.854244   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:50.854708   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:50.854735   46388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:38:51.246971   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:38:51.246997   46388 machine.go:91] provisioned docker machine in 1.275679147s
	I0115 10:38:51.247026   46388 start.go:300] post-start starting for "no-preload-824502" (driver="kvm2")
	I0115 10:38:51.247043   46388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:38:51.247063   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.247450   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:38:51.247481   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.250477   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250751   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.250783   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.250951   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.251085   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.251227   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.251308   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.350552   46388 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:38:51.355893   46388 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 10:38:51.355918   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/addons for local assets ...
	I0115 10:38:51.355994   46388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-4821/.minikube/files for local assets ...
	I0115 10:38:51.356096   46388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem -> 134822.pem in /etc/ssl/certs
	I0115 10:38:51.356220   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 10:38:51.366598   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:38:51.393765   46388 start.go:303] post-start completed in 146.702407ms
	I0115 10:38:51.393798   46388 fix.go:56] fixHost completed within 20.616533939s
	I0115 10:38:51.393826   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.396990   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397531   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.397563   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.397785   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.398006   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398190   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.398367   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.398602   46388 main.go:141] libmachine: Using SSH client type: native
	I0115 10:38:51.399038   46388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0115 10:38:51.399057   46388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0115 10:38:51.532940   46388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705315131.477577825
	
	I0115 10:38:51.532962   46388 fix.go:206] guest clock: 1705315131.477577825
	I0115 10:38:51.532971   46388 fix.go:219] Guest: 2024-01-15 10:38:51.477577825 +0000 UTC Remote: 2024-01-15 10:38:51.393803771 +0000 UTC m=+361.851018624 (delta=83.774054ms)
	I0115 10:38:51.533006   46388 fix.go:190] guest clock delta is within tolerance: 83.774054ms
	I0115 10:38:51.533011   46388 start.go:83] releasing machines lock for "no-preload-824502", held for 20.755785276s
	I0115 10:38:51.533031   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.533296   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:51.536532   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537167   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.537206   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.537411   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538058   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538236   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:38:51.538395   46388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:38:51.538461   46388 ssh_runner.go:195] Run: cat /version.json
	I0115 10:38:51.538485   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.538492   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:38:51.541387   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541419   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541791   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541836   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.541878   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:51.541952   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.541959   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:51.542137   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542219   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:38:51.542317   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542396   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:38:51.542477   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.542535   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:38:51.542697   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:38:51.668594   46388 ssh_runner.go:195] Run: systemctl --version
	I0115 10:38:51.675328   46388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:38:51.822660   46388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 10:38:51.830242   46388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 10:38:51.830318   46388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:38:51.846032   46388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 10:38:51.846067   46388 start.go:475] detecting cgroup driver to use...
	I0115 10:38:51.846147   46388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:38:51.863608   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:38:51.875742   46388 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:38:51.875810   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:38:51.888307   46388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:38:51.902327   46388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:38:52.027186   46388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:38:52.170290   46388 docker.go:233] disabling docker service ...
	I0115 10:38:52.170372   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:38:52.184106   46388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:38:52.195719   46388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:38:52.304630   46388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:38:52.420312   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:38:52.434213   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:38:52.453883   46388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:38:52.453946   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.464662   46388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:38:52.464726   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.474291   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.483951   46388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:38:52.493132   46388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:38:52.503668   46388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:38:52.512336   46388 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 10:38:52.512410   46388 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 10:38:52.529602   46388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:38:52.541735   46388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:38:52.664696   46388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:38:52.844980   46388 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:38:52.845051   46388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:38:52.850380   46388 start.go:543] Will wait 60s for crictl version
	I0115 10:38:52.850463   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:52.854500   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:38:52.890488   46388 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0115 10:38:52.890595   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:52.944999   46388 ssh_runner.go:195] Run: crio --version
	I0115 10:38:53.005494   46388 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0115 10:38:54.017897   46387 retry.go:31] will retry after 11.956919729s: kubelet not initialised
	I0115 10:38:53.006783   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetIP
	I0115 10:38:53.009509   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.009903   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:38:53.009934   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:38:53.010135   46388 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0115 10:38:53.014612   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:38:53.029014   46388 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:38:53.029063   46388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:38:53.073803   46388 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0115 10:38:53.073839   46388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 10:38:53.073906   46388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.073943   46388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.073979   46388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.073945   46388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.073914   46388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.073932   46388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.073931   46388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.073918   46388 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0115 10:38:53.075357   46388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.075303   46388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.075478   46388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.075515   46388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.075532   46388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.075482   46388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.075483   46388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.234170   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.248000   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0115 10:38:53.264387   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.289576   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.303961   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.326078   46388 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0115 10:38:53.326132   46388 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.326176   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.331268   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.334628   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.366099   46388 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.426012   46388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0115 10:38:53.426058   46388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.426106   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.426316   46388 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0115 10:38:53.426346   46388 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.426377   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505102   46388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0115 10:38:53.505194   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0115 10:38:53.505201   46388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.505286   46388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0115 10:38:53.505358   46388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.505410   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.505334   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.507596   46388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0115 10:38:53.507630   46388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.507674   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.544052   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0115 10:38:53.544142   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0115 10:38:53.544078   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:53.544263   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0115 10:38:53.544458   46388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0115 10:38:53.544505   46388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.544550   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:38:53.568682   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0115 10:38:53.568786   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.568808   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0115 10:38:53.681576   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681671   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0115 10:38:53.681777   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:53.681779   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:38:53.681918   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.681990   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:53.682040   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0115 10:38:53.682108   46388 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681996   46388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0115 10:38:53.682157   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0115 10:38:53.681927   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 10:38:53.682277   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:53.728102   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:53.728204   46388 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:38:49.944443   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:38:49.944529   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.445085   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:50.945395   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.444784   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:51.944622   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.444886   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:38:52.460961   47063 api_server.go:72] duration metric: took 2.516519088s to wait for apiserver process to appear ...
	I0115 10:38:52.460980   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:38:52.460996   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:52.461498   47063 api_server.go:269] stopped: https://192.168.39.125:8444/healthz: Get "https://192.168.39.125:8444/healthz": dial tcp 192.168.39.125:8444: connect: connection refused
	I0115 10:38:52.961968   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:53.672555   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:55.685156   46584 pod_ready.go:102] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"False"
	I0115 10:38:56.172493   46584 pod_ready.go:92] pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.172521   46584 pod_ready.go:81] duration metric: took 9.010249071s waiting for pod "coredns-5dd5756b68-n59ft" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.172534   46584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.178057   46584 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178080   46584 pod_ready.go:81] duration metric: took 5.538491ms waiting for pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:56.178092   46584 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w4p2z" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w4p2z" not found
	I0115 10:38:56.178100   46584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185048   46584 pod_ready.go:92] pod "etcd-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.185071   46584 pod_ready.go:81] duration metric: took 6.962528ms waiting for pod "etcd-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.185082   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190244   46584 pod_ready.go:92] pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.190263   46584 pod_ready.go:81] duration metric: took 5.173778ms waiting for pod "kube-apiserver-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.190275   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196537   46584 pod_ready.go:92] pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.196555   46584 pod_ready.go:81] duration metric: took 6.272551ms waiting for pod "kube-controller-manager-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.196566   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367735   46584 pod_ready.go:92] pod "kube-proxy-jqgfc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.367766   46584 pod_ready.go:81] duration metric: took 171.191874ms waiting for pod "kube-proxy-jqgfc" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.367779   46584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.209201   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.209232   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.209247   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.283870   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:38:56.283914   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:38:56.461166   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.476935   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.476968   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:56.961147   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:56.966917   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:38:56.966949   47063 api_server.go:103] status: https://192.168.39.125:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:38:57.461505   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:38:57.467290   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:38:57.482673   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:38:57.482709   47063 api_server.go:131] duration metric: took 5.021721974s to wait for apiserver health ...
	I0115 10:38:57.482721   47063 cni.go:84] Creating CNI manager for ""
	I0115 10:38:57.482729   47063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:38:57.484809   47063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:38:57.486522   47063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:38:57.503036   47063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:38:57.523094   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:38:57.539289   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:38:57.539332   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:38:57.539342   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:38:57.539353   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:38:57.539361   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:38:57.539367   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:38:57.539372   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:38:57.539378   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:38:57.539392   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:38:57.539400   47063 system_pods.go:74] duration metric: took 16.288236ms to wait for pod list to return data ...
	I0115 10:38:57.539415   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:38:57.547016   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:38:57.547043   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:38:57.547053   47063 node_conditions.go:105] duration metric: took 7.632954ms to run NodePressure ...
	I0115 10:38:57.547069   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:38:57.838097   47063 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847919   47063 kubeadm.go:787] kubelet initialised
	I0115 10:38:57.847945   47063 kubeadm.go:788] duration metric: took 9.818012ms waiting for restarted kubelet to initialise ...
	I0115 10:38:57.847960   47063 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:57.860753   47063 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.866623   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866666   47063 pod_ready.go:81] duration metric: took 5.881593ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.866679   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.866687   47063 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.873742   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873772   47063 pod_ready.go:81] duration metric: took 7.070689ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.873787   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.873795   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.881283   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881313   47063 pod_ready.go:81] duration metric: took 7.502343ms waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.881328   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.881335   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:57.927473   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927504   47063 pod_ready.go:81] duration metric: took 46.159848ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:57.927516   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:57.927523   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.329002   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329029   47063 pod_ready.go:81] duration metric: took 401.499694ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.329039   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-proxy-d8lcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.329046   47063 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.727362   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727394   47063 pod_ready.go:81] duration metric: took 398.336577ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:58.727411   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:58.727420   47063 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:59.138162   47063 pod_ready.go:97] node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138195   47063 pod_ready.go:81] duration metric: took 410.766568ms waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:38:59.138207   47063 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-709012" hosting pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:38:59.138214   47063 pod_ready.go:38] duration metric: took 1.290244752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:38:59.138232   47063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:38:59.173438   47063 ops.go:34] apiserver oom_adj: -16
	I0115 10:38:59.173463   47063 kubeadm.go:640] restartCluster took 20.622435902s
	I0115 10:38:59.173473   47063 kubeadm.go:406] StartCluster complete in 20.676611158s
	I0115 10:38:59.173494   47063 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.173598   47063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:38:59.176160   47063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:38:59.176389   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:38:59.176558   47063 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:38:59.176645   47063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176652   47063 config.go:182] Loaded profile config "default-k8s-diff-port-709012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:38:59.176680   47063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.176696   47063 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:38:59.176706   47063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.176725   47063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-709012"
	I0115 10:38:59.176768   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177130   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177163   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177188   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177220   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.177254   47063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-709012"
	I0115 10:38:59.177288   47063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.177305   47063 addons.go:243] addon metrics-server should already be in state true
	I0115 10:38:59.177390   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.177796   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.177849   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.182815   47063 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-709012" context rescaled to 1 replicas
	I0115 10:38:59.182849   47063 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:38:59.184762   47063 out.go:177] * Verifying Kubernetes components...
	I0115 10:38:59.186249   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:38:59.196870   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0115 10:38:59.197111   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0115 10:38:59.197493   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.197595   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.198074   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198096   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198236   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.198264   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.198410   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.198620   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.198634   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.199252   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.199278   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.202438   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0115 10:38:59.202957   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.203462   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.203489   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.203829   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.204271   47063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-709012"
	W0115 10:38:59.204295   47063 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:38:59.204322   47063 host.go:66] Checking if "default-k8s-diff-port-709012" exists ...
	I0115 10:38:59.204406   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204434   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.204728   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.204768   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.220973   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0115 10:38:59.221383   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.221873   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.221898   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.222330   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.222537   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.223337   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0115 10:38:59.223746   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0115 10:38:59.224454   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.224557   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.227071   47063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:38:59.225090   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.225234   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.228609   47063 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.228624   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:38:59.228638   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.228668   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229046   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.229064   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.229415   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229515   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.229671   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.230070   47063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:38:59.230093   47063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:38:59.232470   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.233532   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.235985   47063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:38:56.206357   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.524032218s)
	I0115 10:38:56.206399   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0115 10:38:56.206444   46388 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: (2.52429359s)
	I0115 10:38:56.206494   46388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206580   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.524566038s)
	I0115 10:38:56.206594   46388 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:38:56.206609   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206684   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.52488513s)
	I0115 10:38:56.206806   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0115 10:38:56.206718   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.524535788s)
	I0115 10:38:56.206824   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0115 10:38:56.206756   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.524930105s)
	I0115 10:38:56.206843   46388 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.206863   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206780   46388 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.478563083s)
	I0115 10:38:56.206890   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0115 10:38:56.206908   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0115 10:38:56.986404   46388 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0115 10:38:56.986480   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0115 10:38:56.986513   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:56.986555   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0115 10:38:59.063376   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.076785591s)
	I0115 10:38:59.063421   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0115 10:38:59.063449   46388 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.063494   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0115 10:38:59.234530   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.234543   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.237273   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.237334   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:38:59.237349   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:38:59.237367   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.237458   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.237624   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.237776   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.240471   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242356   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.242442   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.242483   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.242538   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.245246   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.245394   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.251844   47063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0115 10:38:59.252344   47063 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:38:59.252855   47063 main.go:141] libmachine: Using API Version  1
	I0115 10:38:59.252876   47063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:38:59.253245   47063 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:38:59.253439   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetState
	I0115 10:38:59.255055   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .DriverName
	I0115 10:38:59.255299   47063 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.255315   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:38:59.255331   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHHostname
	I0115 10:38:59.258732   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259370   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:83:1c", ip: ""} in network mk-default-k8s-diff-port-709012: {Iface:virbr1 ExpiryTime:2024-01-15 11:38:23 +0000 UTC Type:0 Mac:52:54:00:fd:83:1c Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:default-k8s-diff-port-709012 Clientid:01:52:54:00:fd:83:1c}
	I0115 10:38:59.259408   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | domain default-k8s-diff-port-709012 has defined IP address 192.168.39.125 and MAC address 52:54:00:fd:83:1c in network mk-default-k8s-diff-port-709012
	I0115 10:38:59.259554   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHPort
	I0115 10:38:59.259739   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHKeyPath
	I0115 10:38:59.259915   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .GetSSHUsername
	I0115 10:38:59.260060   47063 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/default-k8s-diff-port-709012/id_rsa Username:docker}
	I0115 10:38:59.380593   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:38:59.380623   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:38:59.387602   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:38:59.409765   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:38:59.434624   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:38:59.434655   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:38:59.514744   47063 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:38:59.514778   47063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:38:59.528401   47063 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:38:59.528428   47063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:38:59.552331   47063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:00.775084   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.365286728s)
	I0115 10:39:00.775119   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387483878s)
	I0115 10:39:00.775251   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775268   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775195   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775319   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.775697   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775737   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.775778   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.775791   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.775805   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.775818   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.776009   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.776030   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778922   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.778939   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.778949   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.778959   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.779172   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.780377   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.780395   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.787873   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.787893   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.788142   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.788161   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.882725   47063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330338587s)
	I0115 10:39:00.882775   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.882792   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883118   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883140   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883144   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) DBG | Closing plugin on server side
	I0115 10:39:00.883150   47063 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:00.883166   47063 main.go:141] libmachine: (default-k8s-diff-port-709012) Calling .Close
	I0115 10:39:00.883494   47063 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:00.883513   47063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:00.883523   47063 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-709012"
	I0115 10:39:00.887782   47063 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:38:56.767524   46584 pod_ready.go:92] pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace has status "Ready":"True"
	I0115 10:38:56.767555   46584 pod_ready.go:81] duration metric: took 399.766724ms waiting for pod "kube-scheduler-embed-certs-781270" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:56.767569   46584 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	I0115 10:38:58.776515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:00.777313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:03.358192   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.294671295s)
	I0115 10:39:03.358221   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0115 10:39:03.358249   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:03.358296   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0115 10:39:00.889422   47063 addons.go:505] enable addons completed in 1.71286662s: enabled=[storage-provisioner default-storageclass metrics-server]
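Note on the addons just enabled for this profile: the metrics-server Deployment is configured with the placeholder image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), which is consistent with the metrics-server pod_ready checks below never reporting Ready. As an illustrative check from the host, not something the test log runs (profile/context name taken from the log), one could confirm the addon objects were created:

    kubectl --context default-k8s-diff-port-709012 -n kube-system get deployment metrics-server   # created by metrics-server-deployment.yaml
    kubectl --context default-k8s-diff-port-709012 get apiservice v1beta1.metrics.k8s.io          # registered by metrics-apiservice.yaml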
	I0115 10:39:01.533332   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.534081   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:03.274613   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.277132   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:05.981700   46387 kubeadm.go:787] kubelet initialised
	I0115 10:39:05.981726   46387 kubeadm.go:788] duration metric: took 49.462651853s waiting for restarted kubelet to initialise ...
	I0115 10:39:05.981737   46387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:05.987142   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993872   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.993896   46387 pod_ready.go:81] duration metric: took 6.725677ms waiting for pod "coredns-5644d7b6d9-5qcrz" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.993920   46387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999103   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:05.999133   46387 pod_ready.go:81] duration metric: took 5.204706ms waiting for pod "coredns-5644d7b6d9-rgrbc" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:05.999148   46387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004449   46387 pod_ready.go:92] pod "etcd-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.004472   46387 pod_ready.go:81] duration metric: took 5.315188ms waiting for pod "etcd-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.004484   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010187   46387 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.010209   46387 pod_ready.go:81] duration metric: took 5.716918ms waiting for pod "kube-apiserver-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.010221   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380715   46387 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.380742   46387 pod_ready.go:81] duration metric: took 370.513306ms waiting for pod "kube-controller-manager-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.380756   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780865   46387 pod_ready.go:92] pod "kube-proxy-w9fdn" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:06.780887   46387 pod_ready.go:81] duration metric: took 400.122851ms waiting for pod "kube-proxy-w9fdn" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:06.780899   46387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179755   46387 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.179785   46387 pod_ready.go:81] duration metric: took 398.879027ms waiting for pod "kube-scheduler-old-k8s-version-206509" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.179798   46387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.188315   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
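The pod_ready polling above (pid 46387, old-k8s-version profile) repeatedly reads each system-critical pod's Ready condition. A roughly equivalent one-off check with kubectl would be the following; this is illustrative only, with the context name and label taken from the log:

    kubectl --context old-k8s-version-206509 -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m
    kubectl --context old-k8s-version-206509 -n kube-system get pods    # inspect the remaining system pods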
	I0115 10:39:05.429866   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.071542398s)
	I0115 10:39:05.429896   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0115 10:39:05.429927   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:05.429988   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0115 10:39:08.115120   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.685106851s)
	I0115 10:39:08.115147   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0115 10:39:08.115179   46388 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:08.115226   46388 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0115 10:39:05.540836   47063 node_ready.go:58] node "default-k8s-diff-port-709012" has status "Ready":"False"
	I0115 10:39:07.032884   47063 node_ready.go:49] node "default-k8s-diff-port-709012" has status "Ready":"True"
	I0115 10:39:07.032914   47063 node_ready.go:38] duration metric: took 7.504464113s waiting for node "default-k8s-diff-port-709012" to be "Ready" ...
	I0115 10:39:07.032928   47063 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:07.042672   47063 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048131   47063 pod_ready.go:92] pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.048156   47063 pod_ready.go:81] duration metric: took 5.456337ms waiting for pod "coredns-5dd5756b68-dzd2f" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.048167   47063 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053470   47063 pod_ready.go:92] pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:07.053492   47063 pod_ready.go:81] duration metric: took 5.316882ms waiting for pod "etcd-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.053504   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.061828   47063 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.562201   47063 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.562235   47063 pod_ready.go:81] duration metric: took 2.508719163s waiting for pod "kube-apiserver-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.562248   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571588   47063 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.571614   47063 pod_ready.go:81] duration metric: took 9.356396ms waiting for pod "kube-controller-manager-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.571628   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580269   47063 pod_ready.go:92] pod "kube-proxy-d8lcq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.580291   47063 pod_ready.go:81] duration metric: took 8.654269ms waiting for pod "kube-proxy-d8lcq" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.580305   47063 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833621   47063 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:09.833646   47063 pod_ready.go:81] duration metric: took 253.332081ms waiting for pod "kube-scheduler-default-k8s-diff-port-709012" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:09.833658   47063 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:07.776707   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:09.777515   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.687740   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.187565   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.092236   46388 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.976986955s)
	I0115 10:39:11.092266   46388 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17953-4821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0115 10:39:11.092290   46388 cache_images.go:123] Successfully loaded all cached images
	I0115 10:39:11.092296   46388 cache_images.go:92] LoadImages completed in 18.018443053s
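The image-load sequence above transfers cached tarballs from the host and loads them into CRI-O's image storage with podman. A manual equivalent on the node would look like this (illustrative; paths taken from the log):

    sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0    # load one cached image tarball
    sudo crictl images | grep etcd                                # confirm the container runtime sees it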
	I0115 10:39:11.092373   46388 ssh_runner.go:195] Run: crio config
	I0115 10:39:11.155014   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:11.155036   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:11.155056   46388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:39:11.155074   46388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-824502 NodeName:no-preload-824502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:39:11.155203   46388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-824502"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:39:11.155265   46388 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-824502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
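The rendered kubeadm.yaml and the kubelet drop-in shown above are copied to the node in the next steps. As an optional sanity check before the init phases run, recent kubeadm releases (1.26 and later) can validate the rendered file; this is an illustrative command under that assumption, not something the test executes:

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml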
	I0115 10:39:11.155316   46388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0115 10:39:11.165512   46388 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:39:11.165586   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:39:11.175288   46388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0115 10:39:11.192730   46388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0115 10:39:11.209483   46388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0115 10:39:11.228296   46388 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0115 10:39:11.232471   46388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:39:11.245041   46388 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502 for IP: 192.168.50.136
	I0115 10:39:11.245106   46388 certs.go:190] acquiring lock for shared ca certs: {Name:mkda51f90fbe928ab2568ae32f486bd871a1d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:11.245298   46388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key
	I0115 10:39:11.245364   46388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key
	I0115 10:39:11.245456   46388 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.key
	I0115 10:39:11.245551   46388 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key.cb5546de
	I0115 10:39:11.245617   46388 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key
	I0115 10:39:11.245769   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem (1338 bytes)
	W0115 10:39:11.245808   46388 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482_empty.pem, impossibly tiny 0 bytes
	I0115 10:39:11.245823   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:39:11.245855   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/ca.pem (1078 bytes)
	I0115 10:39:11.245895   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:39:11.245937   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/certs/home/jenkins/minikube-integration/17953-4821/.minikube/certs/key.pem (1675 bytes)
	I0115 10:39:11.246018   46388 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem (1708 bytes)
	I0115 10:39:11.246987   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:39:11.272058   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 10:39:11.295425   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:39:11.320271   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 10:39:11.347161   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:39:11.372529   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:39:11.396765   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:39:11.419507   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 10:39:11.441814   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/certs/13482.pem --> /usr/share/ca-certificates/13482.pem (1338 bytes)
	I0115 10:39:11.463306   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/ssl/certs/134822.pem --> /usr/share/ca-certificates/134822.pem (1708 bytes)
	I0115 10:39:11.485830   46388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-4821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:39:11.510306   46388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:39:11.527095   46388 ssh_runner.go:195] Run: openssl version
	I0115 10:39:11.532483   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13482.pem && ln -fs /usr/share/ca-certificates/13482.pem /etc/ssl/certs/13482.pem"
	I0115 10:39:11.543447   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548266   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:36 /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.548330   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13482.pem
	I0115 10:39:11.554228   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13482.pem /etc/ssl/certs/51391683.0"
	I0115 10:39:11.564891   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134822.pem && ln -fs /usr/share/ca-certificates/134822.pem /etc/ssl/certs/134822.pem"
	I0115 10:39:11.574809   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579217   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:36 /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.579257   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134822.pem
	I0115 10:39:11.584745   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134822.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 10:39:11.596117   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:39:11.606888   46388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611567   46388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.611632   46388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:39:11.617307   46388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:39:11.627893   46388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:39:11.632530   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0115 10:39:11.638562   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0115 10:39:11.644605   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0115 10:39:11.650917   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0115 10:39:11.656970   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0115 10:39:11.662948   46388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
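The certificate handling above combines two openssl idioms: "x509 -hash -noout" computes the subject hash that names the symlink placed in /etc/ssl/certs, and "-checkend 86400" confirms a certificate remains valid for at least the next 24 hours. A compact illustration (paths taken from the log; run on the node):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"              # hash-named symlink
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for 24h+"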
	I0115 10:39:11.669010   46388 kubeadm.go:404] StartCluster: {Name:no-preload-824502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-824502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:39:11.669093   46388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:39:11.669144   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:11.707521   46388 cri.go:89] found id: ""
	I0115 10:39:11.707594   46388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:39:11.719407   46388 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0115 10:39:11.719445   46388 kubeadm.go:636] restartCluster start
	I0115 10:39:11.719511   46388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0115 10:39:11.729609   46388 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.730839   46388 kubeconfig.go:92] found "no-preload-824502" server: "https://192.168.50.136:8443"
	I0115 10:39:11.733782   46388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0115 10:39:11.744363   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:11.744437   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:11.757697   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.245289   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.245389   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.258680   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:12.745234   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:12.745334   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:12.757934   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.244459   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.244549   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.256860   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:13.745400   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:13.745486   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:13.759185   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:14.244696   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.244774   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.257692   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:11.842044   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.339850   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:11.779637   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.278260   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.187668   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.187834   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:14.745104   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:14.745191   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:14.757723   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.244680   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.244760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.259042   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:15.744599   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:15.744692   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:15.761497   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.245412   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.245507   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.260040   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.744664   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:16.744752   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:16.757209   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.244739   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.244826   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.257922   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:17.744411   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:17.744528   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:17.756304   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.244475   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.244580   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.257372   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:18.744977   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:18.745072   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:18.758201   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:19.244832   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.244906   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.257468   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:16.342438   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:18.845282   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:16.776399   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.276057   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:20.686392   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:22.687613   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:19.745014   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:19.745076   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:19.757274   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.245246   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.245307   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.257735   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:20.745333   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:20.745422   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:20.757945   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.245022   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.245112   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.257351   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.744980   46388 api_server.go:166] Checking apiserver status ...
	I0115 10:39:21.745057   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0115 10:39:21.756073   46388 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0115 10:39:21.756099   46388 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0115 10:39:21.756107   46388 kubeadm.go:1135] stopping kube-system containers ...
	I0115 10:39:21.756116   46388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0115 10:39:21.756167   46388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:39:21.800172   46388 cri.go:89] found id: ""
	I0115 10:39:21.800229   46388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0115 10:39:21.815607   46388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:39:21.826460   46388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:39:21.826525   46388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835735   46388 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0115 10:39:21.835758   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:21.963603   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.673572   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.882139   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:22.975846   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:23.061284   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:39:23.061391   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:23.561760   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.061736   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:24.562127   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:21.340520   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.340897   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:21.776123   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:23.776196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.777003   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:24.688163   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.187371   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:25.061818   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.561582   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:39:25.584837   46388 api_server.go:72] duration metric: took 2.523550669s to wait for apiserver process to appear ...
	I0115 10:39:25.584868   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:39:25.584893   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.585385   46388 api_server.go:269] stopped: https://192.168.50.136:8443/healthz: Get "https://192.168.50.136:8443/healthz": dial tcp 192.168.50.136:8443: connect: connection refused
	I0115 10:39:26.085248   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.546970   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.547007   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.547026   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:29.597433   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0115 10:39:29.597466   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0115 10:39:29.597482   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:25.342652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:27.343320   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.840652   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:29.625537   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:29.625587   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.085614   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.091715   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.091745   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:30.585298   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:30.591889   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0115 10:39:30.591919   46388 api_server.go:103] status: https://192.168.50.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0115 10:39:31.086028   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:39:31.091297   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:39:31.099702   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:39:31.099726   46388 api_server.go:131] duration metric: took 5.514851771s to wait for apiserver health ...
	I0115 10:39:31.099735   46388 cni.go:84] Creating CNI manager for ""
	I0115 10:39:31.099741   46388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:39:31.102193   46388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:39:28.275539   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:30.276634   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.104002   46388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:39:31.130562   46388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 10:39:31.163222   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:39:31.186170   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:39:31.186201   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0115 10:39:31.186212   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0115 10:39:31.186222   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0115 10:39:31.186231   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0115 10:39:31.186242   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0115 10:39:31.186252   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0115 10:39:31.186263   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:39:31.186276   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0115 10:39:31.186284   46388 system_pods.go:74] duration metric: took 23.040188ms to wait for pod list to return data ...
	I0115 10:39:31.186292   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:39:31.215529   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:39:31.215567   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:39:31.215584   46388 node_conditions.go:105] duration metric: took 29.283674ms to run NodePressure ...
	I0115 10:39:31.215615   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0115 10:39:31.584238   46388 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590655   46388 kubeadm.go:787] kubelet initialised
	I0115 10:39:31.590679   46388 kubeadm.go:788] duration metric: took 6.418412ms waiting for restarted kubelet to initialise ...
	I0115 10:39:31.590688   46388 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:31.603892   46388 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.612449   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612484   46388 pod_ready.go:81] duration metric: took 8.567896ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.612497   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "coredns-76f75df574-ft2wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.612507   46388 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.622651   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622678   46388 pod_ready.go:81] duration metric: took 10.161967ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.622690   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "etcd-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.622698   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.633893   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633917   46388 pod_ready.go:81] duration metric: took 11.210807ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.633929   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-apiserver-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.633937   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.639395   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639423   46388 pod_ready.go:81] duration metric: took 5.474128ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.639434   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.639442   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:31.989202   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989242   46388 pod_ready.go:81] duration metric: took 349.786667ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:31.989255   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-proxy-nlk2h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:31.989264   46388 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.387200   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387227   46388 pod_ready.go:81] duration metric: took 397.955919ms waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.387236   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "kube-scheduler-no-preload-824502" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.387243   46388 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:32.789213   46388 pod_ready.go:97] node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789235   46388 pod_ready.go:81] duration metric: took 401.985079ms waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:39:32.789245   46388 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-824502" hosting pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:32.789252   46388 pod_ready.go:38] duration metric: took 1.198551697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:32.789271   46388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:39:32.802883   46388 ops.go:34] apiserver oom_adj: -16
	I0115 10:39:32.802901   46388 kubeadm.go:640] restartCluster took 21.083448836s
	I0115 10:39:32.802908   46388 kubeadm.go:406] StartCluster complete in 21.133905255s
	I0115 10:39:32.802921   46388 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.802997   46388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:39:32.804628   46388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:39:32.804880   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:39:32.804990   46388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:39:32.805075   46388 addons.go:69] Setting storage-provisioner=true in profile "no-preload-824502"
	I0115 10:39:32.805094   46388 addons.go:234] Setting addon storage-provisioner=true in "no-preload-824502"
	W0115 10:39:32.805102   46388 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:39:32.805108   46388 addons.go:69] Setting default-storageclass=true in profile "no-preload-824502"
	I0115 10:39:32.805128   46388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-824502"
	I0115 10:39:32.805128   46388 config.go:182] Loaded profile config "no-preload-824502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 10:39:32.805137   46388 addons.go:69] Setting metrics-server=true in profile "no-preload-824502"
	I0115 10:39:32.805165   46388 addons.go:234] Setting addon metrics-server=true in "no-preload-824502"
	I0115 10:39:32.805172   46388 host.go:66] Checking if "no-preload-824502" exists ...
	W0115 10:39:32.805175   46388 addons.go:243] addon metrics-server should already be in state true
	I0115 10:39:32.805219   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.805564   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805565   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805597   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.805602   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805616   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.805698   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.809596   46388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-824502" context rescaled to 1 replicas
	I0115 10:39:32.809632   46388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:39:32.812135   46388 out.go:177] * Verifying Kubernetes components...
	I0115 10:39:32.814392   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:39:32.823244   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0115 10:39:32.823758   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.823864   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0115 10:39:32.824287   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824306   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.824351   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.824693   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.824816   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.824833   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.824857   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.825184   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.825778   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.825823   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.827847   46388 addons.go:234] Setting addon default-storageclass=true in "no-preload-824502"
	W0115 10:39:32.827864   46388 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:39:32.827883   46388 host.go:66] Checking if "no-preload-824502" exists ...
	I0115 10:39:32.828242   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.828286   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.838537   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0115 10:39:32.839162   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.839727   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.839747   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.841293   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.841862   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.841899   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.844309   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0115 10:39:32.844407   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0115 10:39:32.844654   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.844941   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.845132   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845156   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.845712   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.845881   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.845894   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.846316   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:39:32.846347   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:39:32.846910   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.847189   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.849126   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.851699   46388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:39:32.853268   46388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:32.853284   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:39:32.853305   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.855997   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856372   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.856394   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.856569   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.856716   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.856853   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.856975   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.861396   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0115 10:39:32.861893   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.862379   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.862409   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.862874   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.863050   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.864195   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0115 10:39:32.864480   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.866714   46388 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:39:32.864849   46388 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:39:32.868242   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:39:32.868256   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:39:32.868274   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.868596   46388 main.go:141] libmachine: Using API Version  1
	I0115 10:39:32.868613   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:39:32.869057   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:39:32.869306   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetState
	I0115 10:39:32.870918   46388 main.go:141] libmachine: (no-preload-824502) Calling .DriverName
	I0115 10:39:32.871163   46388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:32.871177   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:39:32.871192   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHHostname
	I0115 10:39:32.871252   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871670   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.871691   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.871958   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.872127   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.872288   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.872463   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.874381   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875287   46388 main.go:141] libmachine: (no-preload-824502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:ab:81", ip: ""} in network mk-no-preload-824502: {Iface:virbr3 ExpiryTime:2024-01-15 11:38:44 +0000 UTC Type:0 Mac:52:54:00:e7:ab:81 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:no-preload-824502 Clientid:01:52:54:00:e7:ab:81}
	I0115 10:39:32.875314   46388 main.go:141] libmachine: (no-preload-824502) DBG | domain no-preload-824502 has defined IP address 192.168.50.136 and MAC address 52:54:00:e7:ab:81 in network mk-no-preload-824502
	I0115 10:39:32.875478   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHPort
	I0115 10:39:32.875624   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHKeyPath
	I0115 10:39:32.875786   46388 main.go:141] libmachine: (no-preload-824502) Calling .GetSSHUsername
	I0115 10:39:32.875893   46388 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/no-preload-824502/id_rsa Username:docker}
	I0115 10:39:32.982357   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:39:33.059016   46388 node_ready.go:35] waiting up to 6m0s for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:33.059259   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:39:33.059281   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:39:33.060796   46388 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0115 10:39:33.060983   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:39:33.110608   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:39:33.110633   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:39:33.154857   46388 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:33.154886   46388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:39:33.198495   46388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:39:34.178167   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117123302s)
	I0115 10:39:34.178220   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178234   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178312   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19592253s)
	I0115 10:39:34.178359   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178372   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178649   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178669   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.178687   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178712   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178723   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178735   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178691   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.178800   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.178811   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.178823   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.178982   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179001   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.179003   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179040   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.179057   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.179075   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.186855   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.186875   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.187114   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.187135   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.187154   46388 main.go:141] libmachine: (no-preload-824502) DBG | Closing plugin on server side
	I0115 10:39:34.293778   46388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095231157s)
	I0115 10:39:34.293837   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.293861   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294161   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294184   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294194   46388 main.go:141] libmachine: Making call to close driver server
	I0115 10:39:34.294203   46388 main.go:141] libmachine: (no-preload-824502) Calling .Close
	I0115 10:39:34.294451   46388 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:39:34.294475   46388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:39:34.294487   46388 addons.go:470] Verifying addon metrics-server=true in "no-preload-824502"
	I0115 10:39:34.296653   46388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:39:29.687541   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:31.689881   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.692248   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.298179   46388 addons.go:505] enable addons completed in 1.493195038s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0115 10:39:31.842086   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:33.843601   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:32.775651   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:34.778997   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:36.186700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.688932   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:35.063999   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:37.068802   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:39.564287   46388 node_ready.go:58] node "no-preload-824502" has status "Ready":"False"
	I0115 10:39:36.341901   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:38.344615   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:37.278252   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:39.780035   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:41.186854   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.687410   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:40.063481   46388 node_ready.go:49] node "no-preload-824502" has status "Ready":"True"
	I0115 10:39:40.063509   46388 node_ready.go:38] duration metric: took 7.00445832s waiting for node "no-preload-824502" to be "Ready" ...
	I0115 10:39:40.063521   46388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:39:40.069733   46388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077511   46388 pod_ready.go:92] pod "coredns-76f75df574-ft2wt" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.077539   46388 pod_ready.go:81] duration metric: took 7.783253ms waiting for pod "coredns-76f75df574-ft2wt" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.077549   46388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082665   46388 pod_ready.go:92] pod "etcd-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.082693   46388 pod_ready.go:81] duration metric: took 5.137636ms waiting for pod "etcd-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.082704   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087534   46388 pod_ready.go:92] pod "kube-apiserver-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.087552   46388 pod_ready.go:81] duration metric: took 4.840583ms waiting for pod "kube-apiserver-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.087563   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092447   46388 pod_ready.go:92] pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.092473   46388 pod_ready.go:81] duration metric: took 4.90114ms waiting for pod "kube-controller-manager-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.092493   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464047   46388 pod_ready.go:92] pod "kube-proxy-nlk2h" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:40.464065   46388 pod_ready.go:81] duration metric: took 371.565815ms waiting for pod "kube-proxy-nlk2h" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.464075   46388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:42.472255   46388 pod_ready.go:102] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:43.471011   46388 pod_ready.go:92] pod "kube-scheduler-no-preload-824502" in "kube-system" namespace has status "Ready":"True"
	I0115 10:39:43.471033   46388 pod_ready.go:81] duration metric: took 3.006951578s waiting for pod "kube-scheduler-no-preload-824502" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:43.471045   46388 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	I0115 10:39:40.841668   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.842151   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:42.277636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:44.787510   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:46.187891   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:48.687578   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.478255   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.978120   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:45.340455   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.341486   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.840829   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:47.275430   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.776946   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.188236   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.686748   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:49.980682   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:52.479488   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.840971   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:53.841513   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:51.778023   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.275602   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:55.687892   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.186665   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:54.978059   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.978213   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.978881   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.341772   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:58.841021   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:56.775700   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:39:59.274671   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:01.280895   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.186976   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:02.688712   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.978942   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.482480   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:00.841912   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.340823   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:03.775015   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.776664   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.185744   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.185877   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:09.187192   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.979141   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.479235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:05.840997   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:07.842100   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:08.278110   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.775278   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:11.686672   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.187037   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.978475   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.978621   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:10.346343   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:12.841357   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.841981   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:13.278313   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:15.777340   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.188343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.687840   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:14.979177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:16.981550   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.478364   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:17.340973   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:19.341317   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:18.275525   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:20.277493   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.187342   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.693743   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.480386   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.481947   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:21.341650   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:23.841949   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:22.777674   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.273859   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:26.186846   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.188206   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.978266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.979824   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:25.842629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:28.341954   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:27.274109   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:29.275517   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:31.277396   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.688520   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.187343   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.478712   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:32.978549   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:30.843559   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.340435   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:33.278639   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.777051   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.186611   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:34.978720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:37.488790   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:35.841994   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.340074   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:38.278319   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.776206   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:39.978911   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.478331   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.187741   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.687320   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:40.340766   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.341909   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.843116   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:42.777726   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.777953   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:45.188685   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.687270   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:44.978841   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.477932   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.478482   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.340237   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.341936   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:47.275247   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.777753   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:49.688548   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.187385   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.188261   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.478562   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.978677   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:51.840537   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:53.842188   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:52.278594   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:54.774847   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.687614   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:59.186203   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.479325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.979266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.340295   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.342857   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:56.776968   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:40:58.777421   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.278730   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.186645   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.187583   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:01.478127   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.478816   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:00.841474   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.340255   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:03.775648   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.779261   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.687557   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.688081   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.979671   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.478240   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:05.345230   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:07.841561   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:09.841629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:08.275641   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.276466   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.187771   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.688852   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:10.478832   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.978808   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:11.841717   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.341355   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:12.775133   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.274677   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:15.186001   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.186387   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.186931   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:14.979099   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.478539   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:16.841294   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:18.842244   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:17.776623   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:20.274196   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.187095   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.689700   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:19.978471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.478169   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.479319   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:21.341851   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:23.343663   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:22.275134   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:24.276420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.185307   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.186549   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.978977   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.979239   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:25.840539   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:27.840819   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:29.842580   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:26.775069   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:28.775244   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.275239   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:30.187482   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.687454   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:31.478330   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.479265   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:32.340974   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.342201   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:33.275561   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.775652   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:34.687487   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.689628   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:39.186244   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:35.979235   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.981609   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:36.342452   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:38.841213   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:37.775893   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.274573   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.186313   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.687042   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:40.478993   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.479953   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:41.341359   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:43.840325   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:42.775636   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.275821   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.687911   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.186598   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:44.977946   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:46.980471   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.477591   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:45.841849   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:48.341443   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:47.276441   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:49.775182   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.687273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.187451   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.480325   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.979440   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:50.841657   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:53.341257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:51.776199   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:54.274920   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.188121   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.191970   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.478903   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:58.979288   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:55.341479   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:57.841144   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.841215   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:56.775625   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.276127   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:41:59.687860   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:02.188506   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.480582   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:03.977715   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.841608   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.340546   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:01.775220   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.274050   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.277327   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:04.688269   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.187187   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:05.977760   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:07.978356   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:06.340629   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.341333   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:08.775130   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.776410   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.686836   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.187035   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.187814   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:09.978478   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.477854   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.477883   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:10.341625   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:12.841300   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:14.842745   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:13.276029   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:15.774949   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.686998   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.689531   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.478177   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:18.978154   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:16.844053   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:19.339915   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:17.775988   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:20.276213   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.187144   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.188273   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.479275   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.977720   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:21.342019   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:23.343747   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:22.775222   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.274922   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.688162   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.186701   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.979093   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.478022   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:25.843596   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:28.340257   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:27.275420   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:29.275918   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:31.276702   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.186796   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.686406   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.478933   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.978757   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:30.341780   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:32.842117   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:33.774432   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.775822   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:34.687304   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:36.687850   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.187956   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.478261   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.978198   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:35.341314   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:37.840626   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.842475   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:38.275042   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:40.774892   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.686479   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.688800   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:39.980119   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:42.478070   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.478709   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:41.844661   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:44.340617   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:43.278574   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:45.775324   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.185760   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.186399   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.479381   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:48.979086   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:46.842369   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:49.341153   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:47.776338   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.275329   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:50.187219   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.687370   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.479573   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.978568   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:51.840818   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:53.842279   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:52.776812   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:54.780747   46584 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.187111   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:57.187263   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.478479   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.977687   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:55.846775   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:58.340913   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:42:56.768584   46584 pod_ready.go:81] duration metric: took 4m0.001000825s waiting for pod "metrics-server-57f55c9bc5-wxclh" in "kube-system" namespace to be "Ready" ...
	E0115 10:42:56.768615   46584 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:42:56.768623   46584 pod_ready.go:38] duration metric: took 4m9.613401399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:42:56.768641   46584 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:42:56.768686   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:42:56.768739   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:42:56.842276   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:56.842298   46584 cri.go:89] found id: ""
	I0115 10:42:56.842309   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:42:56.842361   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.847060   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:42:56.847118   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:42:56.887059   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:56.887092   46584 cri.go:89] found id: ""
	I0115 10:42:56.887100   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:42:56.887158   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.893238   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:42:56.893289   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:42:56.933564   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:56.933593   46584 cri.go:89] found id: ""
	I0115 10:42:56.933603   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:42:56.933657   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.937882   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:42:56.937958   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:42:56.980953   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:56.980989   46584 cri.go:89] found id: ""
	I0115 10:42:56.980999   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:42:56.981051   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:56.985008   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:42:56.985058   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:42:57.026275   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:57.026305   46584 cri.go:89] found id: ""
	I0115 10:42:57.026315   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:42:57.026373   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.030799   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:42:57.030885   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:42:57.071391   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:42:57.071416   46584 cri.go:89] found id: ""
	I0115 10:42:57.071424   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:42:57.071485   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.076203   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:42:57.076254   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:42:57.119035   46584 cri.go:89] found id: ""
	I0115 10:42:57.119062   46584 logs.go:284] 0 containers: []
	W0115 10:42:57.119069   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:42:57.119074   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:42:57.119129   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:42:57.167335   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:57.167355   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:57.167360   46584 cri.go:89] found id: ""
	I0115 10:42:57.167367   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:42:57.167411   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.171919   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:42:57.176255   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:42:57.176284   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:42:57.328501   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:42:57.328538   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:42:57.390279   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:42:57.390309   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:57.886607   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:42:57.886645   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:42:57.937391   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:42:57.937420   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:42:58.001313   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:42:58.001348   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:42:58.016772   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:42:58.016804   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:42:58.060489   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:42:58.060516   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:42:58.102993   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:42:58.103043   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:42:58.140732   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:42:58.140764   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:42:58.191891   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:42:58.191927   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:42:58.235836   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:42:58.235861   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:42:58.277424   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:42:58.277465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:00.844771   46584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:00.862922   46584 api_server.go:72] duration metric: took 4m17.850865s to wait for apiserver process to appear ...
	I0115 10:43:00.862946   46584 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:00.862992   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:00.863055   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:00.909986   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:00.910013   46584 cri.go:89] found id: ""
	I0115 10:43:00.910020   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:00.910066   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.915553   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:00.915634   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:00.969923   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:00.969951   46584 cri.go:89] found id: ""
	I0115 10:43:00.969961   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:00.970021   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:00.974739   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:00.974805   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:01.024283   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.024305   46584 cri.go:89] found id: ""
	I0115 10:43:01.024314   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:01.024366   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.029325   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:01.029388   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:01.070719   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.070746   46584 cri.go:89] found id: ""
	I0115 10:43:01.070755   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:01.070806   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.074906   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:01.074969   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:01.111715   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.111747   46584 cri.go:89] found id: ""
	I0115 10:43:01.111756   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:01.111805   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.116173   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:01.116225   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:01.157760   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.157791   46584 cri.go:89] found id: ""
	I0115 10:43:01.157802   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:01.157866   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.161944   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:01.162010   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:01.201888   46584 cri.go:89] found id: ""
	I0115 10:43:01.201915   46584 logs.go:284] 0 containers: []
	W0115 10:43:01.201925   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:01.201932   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:01.201990   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:01.244319   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.244346   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.244352   46584 cri.go:89] found id: ""
	I0115 10:43:01.244361   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:01.244454   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.248831   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:01.253617   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:01.253643   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:01.309426   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:01.309465   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:01.346755   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:01.346789   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:01.385238   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:01.385266   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:01.423907   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:01.423941   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:01.480867   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:01.480902   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:01.538367   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:01.538403   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:01.580240   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:01.580273   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:01.622561   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:01.622602   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:01.675436   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:01.675463   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:42:59.687714   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.186463   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.982902   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:03.478178   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:00.840619   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.841154   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:04.842905   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:02.080545   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:02.080578   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:02.144713   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:02.144756   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:02.160120   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:02.160147   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:04.776113   46584 api_server.go:253] Checking apiserver healthz at https://192.168.72.222:8443/healthz ...
	I0115 10:43:04.782741   46584 api_server.go:279] https://192.168.72.222:8443/healthz returned 200:
	ok
	I0115 10:43:04.783959   46584 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:04.783979   46584 api_server.go:131] duration metric: took 3.92102734s to wait for apiserver health ...
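
Above, the apiserver is considered healthy once its /healthz endpoint answers 200 with the body "ok". Checked by hand against the same endpoint, that would look roughly like this (a sketch; the address 192.168.72.222:8443 is this run's apiserver from the log line above, and -k skips TLS verification, which minikube's own probe instead satisfies with proper client credentials):

    curl -k https://192.168.72.222:8443/healthz
    # a healthy control plane answers with:
    # ok
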
	I0115 10:43:04.783986   46584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:04.784019   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:04.784071   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:04.832660   46584 cri.go:89] found id: "4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:04.832685   46584 cri.go:89] found id: ""
	I0115 10:43:04.832695   46584 logs.go:284] 1 containers: [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d]
	I0115 10:43:04.832750   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.836959   46584 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:04.837009   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:04.878083   46584 cri.go:89] found id: "30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:04.878103   46584 cri.go:89] found id: ""
	I0115 10:43:04.878110   46584 logs.go:284] 1 containers: [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5]
	I0115 10:43:04.878160   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.882581   46584 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:04.882642   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:04.927778   46584 cri.go:89] found id: "36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:04.927798   46584 cri.go:89] found id: ""
	I0115 10:43:04.927805   46584 logs.go:284] 1 containers: [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2]
	I0115 10:43:04.927848   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.932822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:04.932891   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:04.975930   46584 cri.go:89] found id: "fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:04.975955   46584 cri.go:89] found id: ""
	I0115 10:43:04.975965   46584 logs.go:284] 1 containers: [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b]
	I0115 10:43:04.976010   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:04.980744   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:04.980803   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:05.024300   46584 cri.go:89] found id: "6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.024325   46584 cri.go:89] found id: ""
	I0115 10:43:05.024332   46584 logs.go:284] 1 containers: [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f]
	I0115 10:43:05.024383   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.029091   46584 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:05.029159   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:05.081239   46584 cri.go:89] found id: "4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.081264   46584 cri.go:89] found id: ""
	I0115 10:43:05.081273   46584 logs.go:284] 1 containers: [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc]
	I0115 10:43:05.081332   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.085822   46584 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:05.085879   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:05.126839   46584 cri.go:89] found id: ""
	I0115 10:43:05.126884   46584 logs.go:284] 0 containers: []
	W0115 10:43:05.126896   46584 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:05.126903   46584 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:05.126963   46584 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:05.168241   46584 cri.go:89] found id: "111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.168269   46584 cri.go:89] found id: "6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.168276   46584 cri.go:89] found id: ""
	I0115 10:43:05.168285   46584 logs.go:284] 2 containers: [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c]
	I0115 10:43:05.168343   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.173309   46584 ssh_runner.go:195] Run: which crictl
	I0115 10:43:05.177144   46584 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:05.177164   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:05.239116   46584 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:05.239148   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:05.368712   46584 logs.go:123] Gathering logs for kube-apiserver [4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d] ...
	I0115 10:43:05.368745   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dcae24d7ff7b9d7394eeebed626483e5d99439aa0b05426066e6d8d93ea575d"
	I0115 10:43:05.429504   46584 logs.go:123] Gathering logs for coredns [36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2] ...
	I0115 10:43:05.429540   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36c07653904864d30003a97347c33e5a460f1cab3314bdc6a5adcdf8342688e2"
	I0115 10:43:05.473181   46584 logs.go:123] Gathering logs for storage-provisioner [111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b] ...
	I0115 10:43:05.473216   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 111601a6dd351ca1c5569ce63479474218310ceec342d624efd44734874dcf9b"
	I0115 10:43:05.510948   46584 logs.go:123] Gathering logs for storage-provisioner [6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c] ...
	I0115 10:43:05.510974   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6abb26467c97103f3cf1db368dda5a85e703e8a0833b5044b513fe68ce6a9c0c"
	I0115 10:43:05.551052   46584 logs.go:123] Gathering logs for kube-controller-manager [4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc] ...
	I0115 10:43:05.551082   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4095240514ca1d240782d163528bebe054ad15c0bf402da62619d222638d5fdc"
	I0115 10:43:05.606711   46584 logs.go:123] Gathering logs for container status ...
	I0115 10:43:05.606746   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:05.661634   46584 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:05.661663   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:05.675627   46584 logs.go:123] Gathering logs for etcd [30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5] ...
	I0115 10:43:05.675656   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a66dab34a57d0a947a1123c38f2156353122ebe66612d43388f95bf3a554e5"
	I0115 10:43:05.736266   46584 logs.go:123] Gathering logs for kube-proxy [6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f] ...
	I0115 10:43:05.736305   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f792de826409e732039b2852698329fb98511bd09d19751654cc849efa4164f"
	I0115 10:43:05.775567   46584 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:05.775597   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:06.111495   46584 logs.go:123] Gathering logs for kube-scheduler [fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b] ...
	I0115 10:43:06.111531   46584 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8643f05eca80c1222cd8c7c677068a4196123a544e2fc855f0a26a663a1b9b"
	I0115 10:43:08.661238   46584 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:08.661275   46584 system_pods.go:61] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.661282   46584 system_pods.go:61] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.661288   46584 system_pods.go:61] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.661294   46584 system_pods.go:61] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.661300   46584 system_pods.go:61] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.661306   46584 system_pods.go:61] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.661316   46584 system_pods.go:61] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.661324   46584 system_pods.go:61] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.661335   46584 system_pods.go:74] duration metric: took 3.877343546s to wait for pod list to return data ...
	I0115 10:43:08.661342   46584 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:08.664367   46584 default_sa.go:45] found service account: "default"
	I0115 10:43:08.664393   46584 default_sa.go:55] duration metric: took 3.04125ms for default service account to be created ...
	I0115 10:43:08.664408   46584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:08.672827   46584 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:08.672852   46584 system_pods.go:89] "coredns-5dd5756b68-n59ft" [34777797-e585-42b7-852f-87d8bf442f6f] Running
	I0115 10:43:08.672860   46584 system_pods.go:89] "etcd-embed-certs-781270" [fd95a593-a2c5-40fb-8186-d80d16800735] Running
	I0115 10:43:08.672867   46584 system_pods.go:89] "kube-apiserver-embed-certs-781270" [d69f130c-2120-4350-bb02-f88ff689a53a] Running
	I0115 10:43:08.672873   46584 system_pods.go:89] "kube-controller-manager-embed-certs-781270" [d0c86ce5-79af-430d-b0a9-1a9e4e5953df] Running
	I0115 10:43:08.672879   46584 system_pods.go:89] "kube-proxy-jqgfc" [a0df28b2-1ce0-40c7-b9aa-d56862f39034] Running
	I0115 10:43:08.672885   46584 system_pods.go:89] "kube-scheduler-embed-certs-781270" [9ca77b9b-651d-4634-afa0-8130170ed7c5] Running
	I0115 10:43:08.672895   46584 system_pods.go:89] "metrics-server-57f55c9bc5-wxclh" [2a52a963-a5dd-4ead-8da3-0d502c2c96ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:08.672906   46584 system_pods.go:89] "storage-provisioner" [f13c7475-31d6-4aec-9905-070fafc63afa] Running
	I0115 10:43:08.672920   46584 system_pods.go:126] duration metric: took 8.505614ms to wait for k8s-apps to be running ...
	I0115 10:43:08.672933   46584 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:08.672984   46584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:08.690592   46584 system_svc.go:56] duration metric: took 17.651896ms WaitForService to wait for kubelet.
	I0115 10:43:08.690618   46584 kubeadm.go:581] duration metric: took 4m25.678563679s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:08.690640   46584 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:08.694652   46584 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:08.694679   46584 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:08.694692   46584 node_conditions.go:105] duration metric: took 4.045505ms to run NodePressure ...
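
The NodePressure verification above works off the node's reported capacity (17784752Ki of ephemeral storage and 2 CPUs in this run). The same figures can be read back with kubectl, for example (a sketch using standard node status fields; it assumes the kubeconfig currently points at this cluster):

    kubectl get nodes -o jsonpath='{.items[*].status.capacity}'
    # prints each node's capacity map, including the "cpu" and "ephemeral-storage" keys
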
	I0115 10:43:08.694705   46584 start.go:228] waiting for startup goroutines ...
	I0115 10:43:08.694713   46584 start.go:233] waiting for cluster config update ...
	I0115 10:43:08.694725   46584 start.go:242] writing updated cluster config ...
	I0115 10:43:08.694991   46584 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:08.747501   46584 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:08.750319   46584 out.go:177] * Done! kubectl is now configured to use "embed-certs-781270" cluster and "default" namespace by default
	I0115 10:43:04.686284   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:06.703127   46387 pod_ready.go:102] pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.180590   46387 pod_ready.go:81] duration metric: took 4m0.000776944s waiting for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:07.180624   46387 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-qq58p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0115 10:43:07.180644   46387 pod_ready.go:38] duration metric: took 4m1.198895448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:07.180669   46387 kubeadm.go:640] restartCluster took 5m11.875261334s
	W0115 10:43:07.180729   46387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
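
The warning above is the consequence of the 4m readiness timeout: metrics-server-74d5856cc6-qq58p never reached Ready, so this cluster is reset and re-initialised in the lines that follow. When chasing such a hang interactively, the usual first step is to inspect the pod's conditions and events, roughly (a sketch; the pod name is the one from this log and differs per deployment):

    kubectl -n kube-system describe pod metrics-server-74d5856cc6-qq58p
    # or wait on the Ready condition with an explicit bound, much like the test harness does
    kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-74d5856cc6-qq58p --timeout=240s
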
	I0115 10:43:07.180765   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0115 10:43:05.479764   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.978536   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:07.343529   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841510   47063 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.841533   47063 pod_ready.go:81] duration metric: took 4m0.007868879s waiting for pod "metrics-server-57f55c9bc5-qpb25" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:09.841542   47063 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:09.841549   47063 pod_ready.go:38] duration metric: took 4m2.808610487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:09.841562   47063 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:09.841584   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:09.841625   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:12.165729   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.984931075s)
	I0115 10:43:12.165790   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:12.178710   46387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:43:12.188911   46387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:43:12.199329   46387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:43:12.199377   46387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0115 10:43:12.411245   46387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
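
The preflight line above is kubeadm's stock warning that the kubelet systemd unit is not enabled. Its suggested remedy, spelled out as a command, would be the following (a sketch simply expanding what the warning names; whether it is actually needed here depends on how the tooling manages the kubelet unit on this node):

    sudo systemctl enable kubelet.service
    # verify afterwards
    systemctl is-enabled kubelet.service
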
	I0115 10:43:09.980448   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:12.478625   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:14.479234   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:09.904898   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:09.904921   47063 cri.go:89] found id: ""
	I0115 10:43:09.904930   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:09.904996   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.911493   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:09.911557   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:09.958040   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:09.958060   47063 cri.go:89] found id: ""
	I0115 10:43:09.958070   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:09.958122   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:09.962914   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:09.962972   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:10.033848   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:10.033875   47063 cri.go:89] found id: ""
	I0115 10:43:10.033885   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:10.033946   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.043173   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:10.043232   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:10.088380   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:10.088405   47063 cri.go:89] found id: ""
	I0115 10:43:10.088415   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:10.088478   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.094288   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:10.094350   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:10.145428   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:10.145453   47063 cri.go:89] found id: ""
	I0115 10:43:10.145463   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:10.145547   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.150557   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:10.150637   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:10.206875   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:10.206901   47063 cri.go:89] found id: ""
	I0115 10:43:10.206915   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:10.206971   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.211979   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:10.212039   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:10.260892   47063 cri.go:89] found id: ""
	I0115 10:43:10.260914   47063 logs.go:284] 0 containers: []
	W0115 10:43:10.260924   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:10.260936   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:10.260987   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:10.315938   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.315970   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:10.315978   47063 cri.go:89] found id: ""
	I0115 10:43:10.315987   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:10.316045   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.324077   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:10.332727   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:10.332756   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:10.376006   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:10.376034   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:10.967301   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:10.967337   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:11.033301   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:11.033327   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:11.091151   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:11.091184   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:11.145411   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:11.145447   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:11.194249   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:11.194274   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:11.373988   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:11.374020   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:11.442754   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:11.442788   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:11.486282   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:11.486315   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:11.547428   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:11.547464   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:11.560977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:11.561005   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:11.603150   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:11.603179   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.149324   47063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:14.166360   47063 api_server.go:72] duration metric: took 4m14.983478755s to wait for apiserver process to appear ...
	I0115 10:43:14.166391   47063 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:14.166444   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:14.166504   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:14.211924   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:14.211950   47063 cri.go:89] found id: ""
	I0115 10:43:14.211961   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:14.212018   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.216288   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:14.216352   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:14.264237   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:14.264270   47063 cri.go:89] found id: ""
	I0115 10:43:14.264280   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:14.264338   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.268883   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:14.268947   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:14.329606   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:14.329631   47063 cri.go:89] found id: ""
	I0115 10:43:14.329639   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:14.329694   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.334069   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:14.334133   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:14.374753   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.374779   47063 cri.go:89] found id: ""
	I0115 10:43:14.374788   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:14.374842   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.380452   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:14.380529   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:14.422341   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:14.422371   47063 cri.go:89] found id: ""
	I0115 10:43:14.422380   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:14.422444   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.427106   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:14.427169   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:14.469410   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:14.469440   47063 cri.go:89] found id: ""
	I0115 10:43:14.469450   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:14.469511   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.475098   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:14.475216   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:14.533771   47063 cri.go:89] found id: ""
	I0115 10:43:14.533794   47063 logs.go:284] 0 containers: []
	W0115 10:43:14.533800   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:14.533805   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:14.533876   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:14.573458   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:14.573483   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:14.573490   47063 cri.go:89] found id: ""
	I0115 10:43:14.573498   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:14.573561   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.578186   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:14.583133   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:14.583157   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:14.631142   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:14.631180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:16.978406   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:18.979879   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:15.076904   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:15.076958   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:15.129739   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:15.129778   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:15.169656   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:15.169685   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:15.229569   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:15.229616   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:15.293037   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:15.293075   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:15.351198   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:15.351243   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:15.394604   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:15.394642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:15.451142   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:15.451180   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:15.466108   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:15.466146   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:15.595576   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:15.595615   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:15.643711   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:15.643740   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.200861   47063 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8444/healthz ...
	I0115 10:43:18.207576   47063 api_server.go:279] https://192.168.39.125:8444/healthz returned 200:
	ok
	I0115 10:43:18.208943   47063 api_server.go:141] control plane version: v1.28.4
	I0115 10:43:18.208964   47063 api_server.go:131] duration metric: took 4.042566476s to wait for apiserver health ...
	I0115 10:43:18.208971   47063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:18.208992   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:18.209037   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:18.254324   47063 cri.go:89] found id: "9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.254353   47063 cri.go:89] found id: ""
	I0115 10:43:18.254361   47063 logs.go:284] 1 containers: [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6]
	I0115 10:43:18.254405   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.258765   47063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:18.258844   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:18.303785   47063 cri.go:89] found id: "16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.303811   47063 cri.go:89] found id: ""
	I0115 10:43:18.303820   47063 logs.go:284] 1 containers: [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8]
	I0115 10:43:18.303880   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.308940   47063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:18.309009   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:18.358850   47063 cri.go:89] found id: "d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:18.358878   47063 cri.go:89] found id: ""
	I0115 10:43:18.358888   47063 logs.go:284] 1 containers: [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a]
	I0115 10:43:18.358954   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.363588   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:18.363656   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:18.412797   47063 cri.go:89] found id: "71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.412820   47063 cri.go:89] found id: ""
	I0115 10:43:18.412828   47063 logs.go:284] 1 containers: [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022]
	I0115 10:43:18.412878   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.418704   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:18.418765   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:18.460050   47063 cri.go:89] found id: "7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:18.460074   47063 cri.go:89] found id: ""
	I0115 10:43:18.460083   47063 logs.go:284] 1 containers: [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f]
	I0115 10:43:18.460138   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.465581   47063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:18.465642   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:18.516632   47063 cri.go:89] found id: "5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:18.516656   47063 cri.go:89] found id: ""
	I0115 10:43:18.516665   47063 logs.go:284] 1 containers: [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045]
	I0115 10:43:18.516719   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.521873   47063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:18.521935   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:18.574117   47063 cri.go:89] found id: ""
	I0115 10:43:18.574145   47063 logs.go:284] 0 containers: []
	W0115 10:43:18.574154   47063 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:18.574161   47063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:18.574222   47063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:18.630561   47063 cri.go:89] found id: "ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.630593   47063 cri.go:89] found id: "9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:18.630599   47063 cri.go:89] found id: ""
	I0115 10:43:18.630606   47063 logs.go:284] 2 containers: [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9]
	I0115 10:43:18.630666   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.636059   47063 ssh_runner.go:195] Run: which crictl
	I0115 10:43:18.640707   47063 logs.go:123] Gathering logs for kube-scheduler [71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022] ...
	I0115 10:43:18.640728   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71abda814d83cbbcbf4583e99f96f71e5a941045f46256c474a29078f149f022"
	I0115 10:43:18.681635   47063 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:18.681667   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:18.803880   47063 logs.go:123] Gathering logs for kube-apiserver [9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6] ...
	I0115 10:43:18.803913   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a14416fbd45320b5b2429d9544998eb4dd0e8409bcd2209197a476eb8bcc1f6"
	I0115 10:43:18.864605   47063 logs.go:123] Gathering logs for etcd [16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8] ...
	I0115 10:43:18.864642   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16df79e79d4d9b3029d2470eb1e306eb0c38fc69599e4f32800eae003def4fd8"
	I0115 10:43:18.918210   47063 logs.go:123] Gathering logs for storage-provisioner [ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749] ...
	I0115 10:43:18.918250   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff6b807e1af7bc8000875e508ae0f3666af84b34795b5389b65057fbeb92a749"
	I0115 10:43:18.960702   47063 logs.go:123] Gathering logs for container status ...
	I0115 10:43:18.960733   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:19.013206   47063 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:19.013242   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:19.070193   47063 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:19.070230   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:19.087983   47063 logs.go:123] Gathering logs for kube-controller-manager [5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045] ...
	I0115 10:43:19.088023   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5ae904a7af1e4b64e2fcc0b50e704d51735e2f2e7b4f44c66dacb749279045"
	I0115 10:43:19.150096   47063 logs.go:123] Gathering logs for storage-provisioner [9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9] ...
	I0115 10:43:19.150132   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af5ff2ded14ae74b725b44ed3bce24a35f49e5480bf914b69dddade237c47e9"
	I0115 10:43:19.196977   47063 logs.go:123] Gathering logs for coredns [d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a] ...
	I0115 10:43:19.197006   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7bf892409a21463b7ea22e41606f99294467133a4601a01de10f5b0b276c69a"
	I0115 10:43:19.244166   47063 logs.go:123] Gathering logs for kube-proxy [7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f] ...
	I0115 10:43:19.244202   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7836dc25486755ba802a42823d33ece790c83fbc38709d4581d3767d8b51f93f"
	I0115 10:43:19.290314   47063 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:19.290349   47063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:22.182766   47063 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:22.182794   47063 system_pods.go:61] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.182801   47063 system_pods.go:61] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.182808   47063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.182814   47063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.182820   47063 system_pods.go:61] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.182826   47063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.182836   47063 system_pods.go:61] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.182848   47063 system_pods.go:61] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.182858   47063 system_pods.go:74] duration metric: took 3.973880704s to wait for pod list to return data ...
	I0115 10:43:22.182869   47063 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:22.186304   47063 default_sa.go:45] found service account: "default"
	I0115 10:43:22.186344   47063 default_sa.go:55] duration metric: took 3.464907ms for default service account to be created ...
	I0115 10:43:22.186354   47063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:22.192564   47063 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:22.192595   47063 system_pods.go:89] "coredns-5dd5756b68-dzd2f" [0d078727-4275-4308-9206-b471ce7aa586] Running
	I0115 10:43:22.192604   47063 system_pods.go:89] "etcd-default-k8s-diff-port-709012" [0b05de8c-a3b2-498c-aaa4-6b8b64b703f5] Running
	I0115 10:43:22.192611   47063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-709012" [3ed25242-f2a1-40b5-bc35-02cce9f78407] Running
	I0115 10:43:22.192620   47063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-709012" [e7be14ec-3c91-4d22-8731-fa93f842e218] Running
	I0115 10:43:22.192627   47063 system_pods.go:89] "kube-proxy-d8lcq" [9e68bc58-e11b-4534-9164-eb1b115b1721] Running
	I0115 10:43:22.192634   47063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-709012" [42633db7-7144-4f2a-9a77-774a7fa67fda] Running
	I0115 10:43:22.192644   47063 system_pods.go:89] "metrics-server-57f55c9bc5-qpb25" [3f101dc0-1411-4554-a46a-7d829f2345ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:22.192651   47063 system_pods.go:89] "storage-provisioner" [8a0c2885-50ff-40e4-bd6d-624f33f45c9c] Running
	I0115 10:43:22.192661   47063 system_pods.go:126] duration metric: took 6.301001ms to wait for k8s-apps to be running ...
	I0115 10:43:22.192669   47063 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:22.192720   47063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:22.210150   47063 system_svc.go:56] duration metric: took 17.476738ms WaitForService to wait for kubelet.
	I0115 10:43:22.210169   47063 kubeadm.go:581] duration metric: took 4m23.02729406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:22.210190   47063 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:22.214086   47063 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:22.214111   47063 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:22.214124   47063 node_conditions.go:105] duration metric: took 3.928309ms to run NodePressure ...
	I0115 10:43:22.214137   47063 start.go:228] waiting for startup goroutines ...
	I0115 10:43:22.214146   47063 start.go:233] waiting for cluster config update ...
	I0115 10:43:22.214158   47063 start.go:242] writing updated cluster config ...
	I0115 10:43:22.214394   47063 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:22.264250   47063 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:43:22.267546   47063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-709012" cluster and "default" namespace by default
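At this point the default-k8s-diff-port-709012 profile is fully started and the user's kubeconfig has been pointed at it. A manual follow-up from the host would simply use the context minikube just wrote, for example (illustrative commands, not taken from the run):
	kubectl config use-context default-k8s-diff-port-709012
	kubectl get pods -A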
	I0115 10:43:20.980266   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:23.478672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.109313   46387 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0115 10:43:26.109392   46387 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:43:26.109501   46387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:43:26.109621   46387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:43:26.109750   46387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:43:26.109926   46387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:43:26.110051   46387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:43:26.110114   46387 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0115 10:43:26.110201   46387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:43:26.112841   46387 out.go:204]   - Generating certificates and keys ...
	I0115 10:43:26.112937   46387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:43:26.113031   46387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:43:26.113142   46387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 10:43:26.113237   46387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 10:43:26.113336   46387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 10:43:26.113414   46387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 10:43:26.113530   46387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 10:43:26.113617   46387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 10:43:26.113717   46387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 10:43:26.113814   46387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 10:43:26.113867   46387 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 10:43:26.113959   46387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:43:26.114029   46387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:43:26.114128   46387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:43:26.114214   46387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:43:26.114289   46387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:43:26.114400   46387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:43:26.115987   46387 out.go:204]   - Booting up control plane ...
	I0115 10:43:26.116100   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:43:26.116240   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:43:26.116349   46387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:43:26.116476   46387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:43:26.116677   46387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:43:26.116792   46387 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.004579 seconds
	I0115 10:43:26.116908   46387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:43:26.117097   46387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:43:26.117187   46387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:43:26.117349   46387 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-206509 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 10:43:26.117437   46387 kubeadm.go:322] [bootstrap-token] Using token: zc1jed.g57dxx99f2u8lwfg
	I0115 10:43:26.118960   46387 out.go:204]   - Configuring RBAC rules ...
	I0115 10:43:26.119074   46387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:43:26.119258   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:43:26.119401   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:43:26.119538   46387 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:43:26.119657   46387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:43:26.119723   46387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:43:26.119796   46387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:43:26.119809   46387 kubeadm.go:322] 
	I0115 10:43:26.119857   46387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:43:26.119863   46387 kubeadm.go:322] 
	I0115 10:43:26.119923   46387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:43:26.119930   46387 kubeadm.go:322] 
	I0115 10:43:26.119950   46387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:43:26.120002   46387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:43:26.120059   46387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:43:26.120078   46387 kubeadm.go:322] 
	I0115 10:43:26.120120   46387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:43:26.120185   46387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:43:26.120249   46387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:43:26.120255   46387 kubeadm.go:322] 
	I0115 10:43:26.120359   46387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0115 10:43:26.120426   46387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:43:26.120433   46387 kubeadm.go:322] 
	I0115 10:43:26.120512   46387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120660   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 \
	I0115 10:43:26.120687   46387 kubeadm.go:322]     --control-plane 	  
	I0115 10:43:26.120691   46387 kubeadm.go:322] 
	I0115 10:43:26.120757   46387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:43:26.120763   46387 kubeadm.go:322] 
	I0115 10:43:26.120831   46387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zc1jed.g57dxx99f2u8lwfg \
	I0115 10:43:26.120969   46387 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43c3dd4442254e85dedfcd5acc974dbbd7d1fe36d408784b20e7a754ff15a9d4 
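The kubeadm output above ends with the join commands for additional control-plane and worker nodes. A quick manual sanity check of the freshly initialized v1.16.0 control plane, using the same bundled kubectl and kubeconfig paths that appear throughout this log, would look roughly like the following (illustrative only, not part of the test run):
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl get pods -n kube-system --kubeconfig=/var/lib/minikube/kubeconfig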
	I0115 10:43:26.120990   46387 cni.go:84] Creating CNI manager for ""
	I0115 10:43:26.121000   46387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 10:43:26.122557   46387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 10:43:25.977703   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:27.979775   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:26.123754   46387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 10:43:26.133514   46387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
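The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration referred to by the "Configuring bridge CNI" step. Its exact contents are not captured in the log; a minimal bridge conflist of this kind would look roughly as follows (a sketch only; every field value here is an assumption, and a real conflist also carries the pod subnet for host-local IPAM):
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	     "ipam": {"type": "host-local"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}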
	I0115 10:43:26.152666   46387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:43:26.152776   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.152794   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=old-k8s-version-206509 minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.205859   46387 ops.go:34] apiserver oom_adj: -16
	I0115 10:43:26.398371   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:26.899064   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.398532   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:27.898380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.398986   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:28.899140   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.399224   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.898397   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.399321   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:30.899035   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.398549   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:31.898547   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.399096   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:32.898492   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.399077   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:33.899311   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:34.398839   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:29.980789   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:31.981727   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.479518   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:34.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.398611   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:35.898531   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.399422   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.898569   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.399432   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:37.899380   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.399017   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:38.898561   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:39.398551   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:36.977916   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:38.978672   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:39.899402   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.398556   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:40.898384   46387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:43:41.035213   46387 kubeadm.go:1088] duration metric: took 14.882479947s to wait for elevateKubeSystemPrivileges.
	I0115 10:43:41.035251   46387 kubeadm.go:406] StartCluster complete in 5m45.791159963s
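The burst of `kubectl get sa default` calls above is minikube polling until the default service account exists (the elevateKubeSystemPrivileges step); the minikube-rbac cluster-admin binding applied a few lines earlier can be checked the same way. An equivalent manual check (illustrative, not taken from the run) would be:
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl get clusterrolebinding minikube-rbac --kubeconfig=/var/lib/minikube/kubeconfig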
	I0115 10:43:41.035271   46387 settings.go:142] acquiring lock: {Name:mk971596dc2d183f144c2d43ad35ef59c6c9b610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.035357   46387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:43:41.037947   46387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/kubeconfig: {Name:mk52115240485faafe063c6c63a3c63940044f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:43:41.038220   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:43:41.038242   46387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 10:43:41.038314   46387 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038317   46387 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038333   46387 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-206509"
	I0115 10:43:41.038334   46387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-206509"
	W0115 10:43:41.038341   46387 addons.go:243] addon storage-provisioner should already be in state true
	I0115 10:43:41.038389   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038388   46387 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-206509"
	I0115 10:43:41.038405   46387 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-206509"
	W0115 10:43:41.038428   46387 addons.go:243] addon metrics-server should already be in state true
	I0115 10:43:41.038446   46387 config.go:182] Loaded profile config "old-k8s-version-206509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 10:43:41.038467   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.038724   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038738   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038783   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038787   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.038815   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.038909   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.054942   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0115 10:43:41.055314   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.055844   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.055868   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.056312   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.056464   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0115 10:43:41.056853   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.056878   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.056910   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.057198   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0115 10:43:41.057317   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057341   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.057532   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.057682   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.057844   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.057955   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.057979   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.058300   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.058921   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.058952   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.061947   46387 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-206509"
	W0115 10:43:41.061973   46387 addons.go:243] addon default-storageclass should already be in state true
	I0115 10:43:41.061999   46387 host.go:66] Checking if "old-k8s-version-206509" exists ...
	I0115 10:43:41.062381   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.062405   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.075135   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0115 10:43:41.075593   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.075704   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0115 10:43:41.076514   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.076536   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.076723   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.077196   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.077219   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.077225   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077564   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.077607   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.077723   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.080161   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.080238   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.082210   46387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:43:41.083883   46387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0115 10:43:41.085452   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:43:41.085477   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:43:41.083855   46387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.085496   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.085496   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:43:41.085511   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.086304   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0115 10:43:41.086675   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.087100   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.087120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.087465   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.087970   46387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 10:43:41.088011   46387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 10:43:41.090492   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.091743   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092335   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092355   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092675   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.092695   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.092833   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.092969   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.093129   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.093233   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.094042   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.094209   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.094296   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.094372   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.105226   46387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0115 10:43:41.105644   46387 main.go:141] libmachine: () Calling .GetVersion
	I0115 10:43:41.106092   46387 main.go:141] libmachine: Using API Version  1
	I0115 10:43:41.106120   46387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 10:43:41.106545   46387 main.go:141] libmachine: () Calling .GetMachineName
	I0115 10:43:41.106759   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetState
	I0115 10:43:41.108735   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .DriverName
	I0115 10:43:41.109022   46387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.109040   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:43:41.109057   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHHostname
	I0115 10:43:41.112322   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112771   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7f:eb", ip: ""} in network mk-old-k8s-version-206509: {Iface:virbr4 ExpiryTime:2024-01-15 11:37:39 +0000 UTC Type:0 Mac:52:54:00:b7:7f:eb Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:old-k8s-version-206509 Clientid:01:52:54:00:b7:7f:eb}
	I0115 10:43:41.112797   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | domain old-k8s-version-206509 has defined IP address 192.168.61.70 and MAC address 52:54:00:b7:7f:eb in network mk-old-k8s-version-206509
	I0115 10:43:41.112914   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHPort
	I0115 10:43:41.113100   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHKeyPath
	I0115 10:43:41.113279   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .GetSSHUsername
	I0115 10:43:41.113442   46387 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/old-k8s-version-206509/id_rsa Username:docker}
	I0115 10:43:41.353016   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:43:41.353038   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0115 10:43:41.357846   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:43:41.365469   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:43:41.465358   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:43:41.465379   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:43:41.532584   46387 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:41.532612   46387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:43:41.598528   46387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
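The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves inside the cluster to 192.168.61.1, and also adds a log directive ahead of errors. After the replace, the Corefile carries exactly the hosts block inserted by the command:
	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}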
	I0115 10:43:41.605798   46387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-206509" context rescaled to 1 replicas
	I0115 10:43:41.605838   46387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:43:41.607901   46387 out.go:177] * Verifying Kubernetes components...
	I0115 10:43:41.609363   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:41.608778   46387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:43:42.634034   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268517129s)
	I0115 10:43:42.634071   46387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.024689682s)
	I0115 10:43:42.634090   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634095   46387 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.634103   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634046   46387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.035489058s)
	I0115 10:43:42.634140   46387 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0115 10:43:42.634200   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.276326924s)
	I0115 10:43:42.634228   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634243   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634451   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634495   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634515   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634525   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634534   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634540   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634557   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634570   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634580   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.634589   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.634896   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.634912   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.634967   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.634997   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.635008   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.656600   46387 node_ready.go:49] node "old-k8s-version-206509" has status "Ready":"True"
	I0115 10:43:42.656629   46387 node_ready.go:38] duration metric: took 22.522223ms waiting for node "old-k8s-version-206509" to be "Ready" ...
	I0115 10:43:42.656640   46387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:42.714802   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.714834   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.715273   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.715277   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.715303   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.722261   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:42.792908   46387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183451396s)
	I0115 10:43:42.792964   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.792982   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793316   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793339   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793352   46387 main.go:141] libmachine: Making call to close driver server
	I0115 10:43:42.793361   46387 main.go:141] libmachine: (old-k8s-version-206509) Calling .Close
	I0115 10:43:42.793369   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793580   46387 main.go:141] libmachine: (old-k8s-version-206509) DBG | Closing plugin on server side
	I0115 10:43:42.793625   46387 main.go:141] libmachine: Successfully made call to close driver server
	I0115 10:43:42.793638   46387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 10:43:42.793649   46387 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-206509"
	I0115 10:43:42.796113   46387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0115 10:43:42.798128   46387 addons.go:505] enable addons completed in 1.759885904s: enabled=[storage-provisioner default-storageclass metrics-server]
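storage-provisioner, default-storageclass and metrics-server are now enabled on old-k8s-version-206509; the metrics-server pod is the one that still shows up Pending in the pod list a few lines below. A manual way to inspect it with the same bundled kubectl (illustrative command, not taken from the run) would be:
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl -n kube-system get deploy,pods --kubeconfig=/var/lib/minikube/kubeconfig | grep metrics-server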
	I0115 10:43:40.979360   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477862   46388 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:43.477895   46388 pod_ready.go:81] duration metric: took 4m0.006840717s waiting for pod "metrics-server-57f55c9bc5-6tcwm" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:43.477906   46388 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0115 10:43:43.477915   46388 pod_ready.go:38] duration metric: took 4m3.414382685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:43.477933   46388 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:43.477963   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:43.478033   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:43.533796   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:43.533825   46388 cri.go:89] found id: ""
	I0115 10:43:43.533836   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:43.533893   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.540165   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:43.540224   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:43.576831   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:43.576853   46388 cri.go:89] found id: ""
	I0115 10:43:43.576861   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:43.576922   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.581556   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:43.581616   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:43.625292   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.625315   46388 cri.go:89] found id: ""
	I0115 10:43:43.625323   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:43.625371   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.630741   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:43.630803   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:43.682511   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:43.682553   46388 cri.go:89] found id: ""
	I0115 10:43:43.682563   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:43.682621   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.688126   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:43.688194   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:43.739847   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.739866   46388 cri.go:89] found id: ""
	I0115 10:43:43.739873   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:43.739919   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.744569   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:43.744635   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:43.787603   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:43.787627   46388 cri.go:89] found id: ""
	I0115 10:43:43.787635   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:43.787676   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.792209   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:43.792271   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:43.838530   46388 cri.go:89] found id: ""
	I0115 10:43:43.838557   46388 logs.go:284] 0 containers: []
	W0115 10:43:43.838568   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:43.838576   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:43.838636   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:43.885727   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:43.885755   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:43.885761   46388 cri.go:89] found id: ""
	I0115 10:43:43.885769   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:43.885822   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.891036   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:43.895462   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:43.895493   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:43.939544   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:43.939568   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:43.985944   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:43.985973   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:44.052893   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:44.052923   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:44.116539   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:44.116569   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:44.173390   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:44.173432   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:44.194269   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:44.194295   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:44.239908   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:44.239935   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:44.729495   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:46.231080   46387 pod_ready.go:92] pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:46.231100   46387 pod_ready.go:81] duration metric: took 3.50881186s waiting for pod "coredns-5644d7b6d9-9k84f" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:46.231109   46387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:48.239378   46387 pod_ready.go:102] pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace has status "Ready":"False"
	I0115 10:43:44.737413   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:44.737445   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:44.891846   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:44.891875   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:44.951418   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:44.951453   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:45.000171   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:45.000201   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:45.041629   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:45.041657   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.586439   46388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:47.602078   46388 api_server.go:72] duration metric: took 4m14.792413378s to wait for apiserver process to appear ...
	I0115 10:43:47.602102   46388 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:47.602138   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:47.602193   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:47.646259   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:47.646283   46388 cri.go:89] found id: ""
	I0115 10:43:47.646291   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:47.646346   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.650757   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:47.650830   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:47.691688   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:47.691715   46388 cri.go:89] found id: ""
	I0115 10:43:47.691724   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:47.691777   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.696380   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:47.696467   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:47.738315   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:47.738340   46388 cri.go:89] found id: ""
	I0115 10:43:47.738349   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:47.738402   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.742810   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:47.742870   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:47.783082   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:47.783114   46388 cri.go:89] found id: ""
	I0115 10:43:47.783124   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:47.783178   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.787381   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:47.787432   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:47.832325   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:47.832353   46388 cri.go:89] found id: ""
	I0115 10:43:47.832363   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:47.832420   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.836957   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:47.837014   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:47.877146   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:47.877169   46388 cri.go:89] found id: ""
	I0115 10:43:47.877178   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:47.877231   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.881734   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:47.881782   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:47.921139   46388 cri.go:89] found id: ""
	I0115 10:43:47.921169   46388 logs.go:284] 0 containers: []
	W0115 10:43:47.921180   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:47.921188   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:47.921236   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:47.959829   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:47.959857   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:47.959864   46388 cri.go:89] found id: ""
	I0115 10:43:47.959872   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:47.959924   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.964105   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:47.968040   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:47.968059   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:48.017234   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:48.017266   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:48.073552   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:48.073583   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:48.512500   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:48.512539   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:48.564545   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:48.564578   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:48.609739   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:48.609768   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:48.654076   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:48.654106   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:48.691287   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:48.691314   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:48.739023   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:48.739063   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:48.791976   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:48.792018   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:48.808633   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:48.808659   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:48.933063   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:48.933099   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:48.974794   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:48.974825   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:49.735197   46387 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735227   46387 pod_ready.go:81] duration metric: took 3.504112323s waiting for pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace to be "Ready" ...
	E0115 10:43:49.735237   46387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-sjhnj" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-sjhnj" not found
	I0115 10:43:49.735243   46387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740497   46387 pod_ready.go:92] pod "kube-proxy-lh96p" in "kube-system" namespace has status "Ready":"True"
	I0115 10:43:49.740515   46387 pod_ready.go:81] duration metric: took 5.267229ms waiting for pod "kube-proxy-lh96p" in "kube-system" namespace to be "Ready" ...
	I0115 10:43:49.740525   46387 pod_ready.go:38] duration metric: took 7.083874855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:43:49.740537   46387 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:43:49.740580   46387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:43:49.755697   46387 api_server.go:72] duration metric: took 8.149828702s to wait for apiserver process to appear ...
	I0115 10:43:49.755718   46387 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:43:49.755731   46387 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I0115 10:43:49.762148   46387 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I0115 10:43:49.762995   46387 api_server.go:141] control plane version: v1.16.0
	I0115 10:43:49.763013   46387 api_server.go:131] duration metric: took 7.290279ms to wait for apiserver health ...
	I0115 10:43:49.763019   46387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:49.766597   46387 system_pods.go:59] 4 kube-system pods found
	I0115 10:43:49.766615   46387 system_pods.go:61] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.766620   46387 system_pods.go:61] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.766626   46387 system_pods.go:61] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.766631   46387 system_pods.go:61] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.766637   46387 system_pods.go:74] duration metric: took 3.613036ms to wait for pod list to return data ...
	I0115 10:43:49.766642   46387 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:49.768826   46387 default_sa.go:45] found service account: "default"
	I0115 10:43:49.768844   46387 default_sa.go:55] duration metric: took 2.197235ms for default service account to be created ...
	I0115 10:43:49.768850   46387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:49.772271   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:49.772296   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:49.772304   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:49.772314   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:49.772321   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:49.772339   46387 retry.go:31] will retry after 223.439669ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.001140   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.001165   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.001170   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.001176   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.001181   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.001198   46387 retry.go:31] will retry after 329.400473ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.335362   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.335386   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.335391   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.335398   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.335403   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.335420   46387 retry.go:31] will retry after 466.919302ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:50.806617   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:50.806643   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:50.806649   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:50.806655   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:50.806660   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:50.806678   46387 retry.go:31] will retry after 596.303035ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.407231   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:51.407257   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:51.407264   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:51.407271   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:51.407275   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:51.407292   46387 retry.go:31] will retry after 688.903723ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.102330   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.102357   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.102364   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.102374   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.102382   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.102399   46387 retry.go:31] will retry after 817.783297ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:52.925586   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:52.925612   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:52.925620   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:52.925629   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:52.925636   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:52.925658   46387 retry.go:31] will retry after 797.004884ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:53.728788   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:53.728812   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:53.728817   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:53.728823   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:53.728827   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:53.728843   46387 retry.go:31] will retry after 1.021568746s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:51.528236   46388 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0115 10:43:51.533236   46388 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0115 10:43:51.534697   46388 api_server.go:141] control plane version: v1.29.0-rc.2
	I0115 10:43:51.534714   46388 api_server.go:131] duration metric: took 3.932606059s to wait for apiserver health ...
	I0115 10:43:51.534721   46388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:43:51.534744   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 10:43:51.534796   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 10:43:51.571704   46388 cri.go:89] found id: "04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.571730   46388 cri.go:89] found id: ""
	I0115 10:43:51.571740   46388 logs.go:284] 1 containers: [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751]
	I0115 10:43:51.571793   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.576140   46388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 10:43:51.576201   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 10:43:51.614720   46388 cri.go:89] found id: "0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:51.614803   46388 cri.go:89] found id: ""
	I0115 10:43:51.614823   46388 logs.go:284] 1 containers: [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f]
	I0115 10:43:51.614909   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.620904   46388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 10:43:51.620966   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 10:43:51.659679   46388 cri.go:89] found id: "014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.659711   46388 cri.go:89] found id: ""
	I0115 10:43:51.659721   46388 logs.go:284] 1 containers: [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b]
	I0115 10:43:51.659779   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.664223   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 10:43:51.664275   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 10:43:51.701827   46388 cri.go:89] found id: "c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:51.701850   46388 cri.go:89] found id: ""
	I0115 10:43:51.701858   46388 logs.go:284] 1 containers: [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563]
	I0115 10:43:51.701915   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.707296   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 10:43:51.707354   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 10:43:51.745962   46388 cri.go:89] found id: "d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:51.745989   46388 cri.go:89] found id: ""
	I0115 10:43:51.746006   46388 logs.go:284] 1 containers: [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2]
	I0115 10:43:51.746061   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.750872   46388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 10:43:51.750942   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 10:43:51.796600   46388 cri.go:89] found id: "aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:51.796637   46388 cri.go:89] found id: ""
	I0115 10:43:51.796647   46388 logs.go:284] 1 containers: [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6]
	I0115 10:43:51.796697   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.801250   46388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 10:43:51.801321   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 10:43:51.845050   46388 cri.go:89] found id: ""
	I0115 10:43:51.845072   46388 logs.go:284] 0 containers: []
	W0115 10:43:51.845081   46388 logs.go:286] No container was found matching "kindnet"
	I0115 10:43:51.845087   46388 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0115 10:43:51.845144   46388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0115 10:43:51.880907   46388 cri.go:89] found id: "559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:51.880935   46388 cri.go:89] found id: "9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:51.880942   46388 cri.go:89] found id: ""
	I0115 10:43:51.880951   46388 logs.go:284] 2 containers: [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5]
	I0115 10:43:51.880997   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.885202   46388 ssh_runner.go:195] Run: which crictl
	I0115 10:43:51.889086   46388 logs.go:123] Gathering logs for kube-apiserver [04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751] ...
	I0115 10:43:51.889108   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04397ad49a123c51524422cafad1d90b03d40629c4d4c7b49c142cc37266b751"
	I0115 10:43:51.939740   46388 logs.go:123] Gathering logs for coredns [014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b] ...
	I0115 10:43:51.939770   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 014ec3fd018c5f32206a3cf3dd333394b81f7232b2e1115ab345480e7985be4b"
	I0115 10:43:51.977039   46388 logs.go:123] Gathering logs for kube-scheduler [c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563] ...
	I0115 10:43:51.977068   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c382ae3f75656348482b94252c66f18c1ea0185052de3d6ba0a5b37838410563"
	I0115 10:43:52.024927   46388 logs.go:123] Gathering logs for storage-provisioner [9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5] ...
	I0115 10:43:52.024960   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1cf90048e831fbf797c04686928e2413bf419c1c6877c227b526b7744eb8e5"
	I0115 10:43:52.071850   46388 logs.go:123] Gathering logs for kubelet ...
	I0115 10:43:52.071882   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 10:43:52.123313   46388 logs.go:123] Gathering logs for dmesg ...
	I0115 10:43:52.123343   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 10:43:52.137274   46388 logs.go:123] Gathering logs for describe nodes ...
	I0115 10:43:52.137297   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 10:43:52.260488   46388 logs.go:123] Gathering logs for kube-proxy [d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2] ...
	I0115 10:43:52.260525   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1d6c3b6e1b4e36be01c08d3698365f93eaff9c5d2c677c9140b352df4a9f7c2"
	I0115 10:43:52.301121   46388 logs.go:123] Gathering logs for storage-provisioner [559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432] ...
	I0115 10:43:52.301156   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 559a40ec4f19b4011bd569a74df7f18cd5c0baf1473fd54ec55004d5ebd63432"
	I0115 10:43:52.346323   46388 logs.go:123] Gathering logs for etcd [0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f] ...
	I0115 10:43:52.346349   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a1fe0047462783335041ef8bfe7bb2fb1072696a337e1cf619337488fa72f5f"
	I0115 10:43:52.402759   46388 logs.go:123] Gathering logs for kube-controller-manager [aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6] ...
	I0115 10:43:52.402788   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea55e3208ce8954137084e42c2815e67a56d72a83a48d1a2d94f66335328dc6"
	I0115 10:43:52.457075   46388 logs.go:123] Gathering logs for CRI-O ...
	I0115 10:43:52.457103   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 10:43:52.811321   46388 logs.go:123] Gathering logs for container status ...
	I0115 10:43:52.811359   46388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 10:43:55.374293   46388 system_pods.go:59] 8 kube-system pods found
	I0115 10:43:55.374327   46388 system_pods.go:61] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.374335   46388 system_pods.go:61] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.374342   46388 system_pods.go:61] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.374348   46388 system_pods.go:61] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.374354   46388 system_pods.go:61] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.374361   46388 system_pods.go:61] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.374371   46388 system_pods.go:61] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.374382   46388 system_pods.go:61] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.374394   46388 system_pods.go:74] duration metric: took 3.83966542s to wait for pod list to return data ...
	I0115 10:43:55.374407   46388 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:43:55.376812   46388 default_sa.go:45] found service account: "default"
	I0115 10:43:55.376833   46388 default_sa.go:55] duration metric: took 2.418755ms for default service account to be created ...
	I0115 10:43:55.376843   46388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:43:55.383202   46388 system_pods.go:86] 8 kube-system pods found
	I0115 10:43:55.383227   46388 system_pods.go:89] "coredns-76f75df574-ft2wt" [217729a7-bdfa-452f-8df4-5a9694ad2f02] Running
	I0115 10:43:55.383236   46388 system_pods.go:89] "etcd-no-preload-824502" [835fcfd1-8201-4c6e-b5aa-2939620cb773] Running
	I0115 10:43:55.383244   46388 system_pods.go:89] "kube-apiserver-no-preload-824502" [8ba5df63-4fc6-4580-b41c-1e8176790dee] Running
	I0115 10:43:55.383285   46388 system_pods.go:89] "kube-controller-manager-no-preload-824502" [94920782-059c-4225-8a1c-fcf6e77c0fd2] Running
	I0115 10:43:55.383297   46388 system_pods.go:89] "kube-proxy-nlk2h" [e7aa7c9c-df52-4073-a603-b283d123a230] Running
	I0115 10:43:55.383303   46388 system_pods.go:89] "kube-scheduler-no-preload-824502" [35be3f2c-773a-40b3-af43-f31529e9ebc9] Running
	I0115 10:43:55.383314   46388 system_pods.go:89] "metrics-server-57f55c9bc5-6tcwm" [1815c2ae-e5ce-4c79-9fd9-79b28c2c6780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.383325   46388 system_pods.go:89] "storage-provisioner" [b94d8b0f-d2b0-4f57-9ab7-ff90a842499d] Running
	I0115 10:43:55.383338   46388 system_pods.go:126] duration metric: took 6.489813ms to wait for k8s-apps to be running ...
	I0115 10:43:55.383349   46388 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:43:55.383401   46388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:43:55.399074   46388 system_svc.go:56] duration metric: took 15.719638ms WaitForService to wait for kubelet.
	I0115 10:43:55.399096   46388 kubeadm.go:581] duration metric: took 4m22.589439448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:43:55.399118   46388 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:43:55.403855   46388 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:43:55.403883   46388 node_conditions.go:123] node cpu capacity is 2
	I0115 10:43:55.403896   46388 node_conditions.go:105] duration metric: took 4.771651ms to run NodePressure ...
	I0115 10:43:55.403908   46388 start.go:228] waiting for startup goroutines ...
	I0115 10:43:55.403917   46388 start.go:233] waiting for cluster config update ...
	I0115 10:43:55.403930   46388 start.go:242] writing updated cluster config ...
	I0115 10:43:55.404244   46388 ssh_runner.go:195] Run: rm -f paused
	I0115 10:43:55.453146   46388 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0115 10:43:55.455321   46388 out.go:177] * Done! kubectl is now configured to use "no-preload-824502" cluster and "default" namespace by default
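For reference, the control-plane readiness checks logged by process 46388 above can be reproduced by hand from inside the node. A minimal sketch, reusing the commands and the API server address (192.168.50.136:8443) seen in this run; the curl probe is an assumption, since the tool itself uses an authenticated Go HTTP client rather than curl:

    # Confirm a kube-apiserver process is running (same pattern minikube greps for)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Probe the apiserver health endpoint; /healthz is readable anonymously on a
    # default install, and -k skips TLS verification (assumption, not from the log)
    curl -k https://192.168.50.136:8443/healthz
    # Check that the kubelet unit is active, mirroring the WaitForService step
    sudo systemctl is-active kubelet
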
	I0115 10:43:54.756077   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:54.756099   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:54.756104   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:54.756111   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:54.756116   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:54.756131   46387 retry.go:31] will retry after 1.152306172s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:55.913769   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:55.913792   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:55.913798   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:55.913804   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:55.913810   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:55.913826   46387 retry.go:31] will retry after 2.261296506s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:43:58.179679   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:43:58.179704   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:43:58.179710   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:43:58.179718   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:43:58.179722   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:43:58.179739   46387 retry.go:31] will retry after 2.012023518s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:00.197441   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:00.197471   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:00.197476   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:00.197483   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:00.197487   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:00.197505   46387 retry.go:31] will retry after 3.341619522s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:03.543730   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:03.543752   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:03.543757   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:03.543766   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:03.543771   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:03.543788   46387 retry.go:31] will retry after 2.782711895s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:06.332250   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:06.332276   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:06.332281   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:06.332288   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:06.332294   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:06.332310   46387 retry.go:31] will retry after 5.379935092s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:11.718269   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:11.718315   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:11.718324   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:11.718334   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:11.718343   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:11.718364   46387 retry.go:31] will retry after 6.238812519s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:17.963126   46387 system_pods.go:86] 4 kube-system pods found
	I0115 10:44:17.963150   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:17.963155   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:17.963162   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:17.963167   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:17.963183   46387 retry.go:31] will retry after 7.774120416s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0115 10:44:25.743164   46387 system_pods.go:86] 6 kube-system pods found
	I0115 10:44:25.743190   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:25.743196   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:25.743200   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:25.743204   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:25.743210   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:25.743214   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:25.743231   46387 retry.go:31] will retry after 8.584433466s: missing components: kube-apiserver, kube-scheduler
	I0115 10:44:34.335720   46387 system_pods.go:86] 7 kube-system pods found
	I0115 10:44:34.335751   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:34.335759   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:34.335777   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:34.335785   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:34.335793   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:34.335801   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:34.335815   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:34.335834   46387 retry.go:31] will retry after 13.073630932s: missing components: kube-apiserver
	I0115 10:44:47.415277   46387 system_pods.go:86] 8 kube-system pods found
	I0115 10:44:47.415304   46387 system_pods.go:89] "coredns-5644d7b6d9-9k84f" [2c958bfa-7681-48d0-9627-5116a30efc8b] Running
	I0115 10:44:47.415311   46387 system_pods.go:89] "etcd-old-k8s-version-206509" [4a4a10f9-f177-408e-b63f-208cb56b7603] Running
	I0115 10:44:47.415318   46387 system_pods.go:89] "kube-apiserver-old-k8s-version-206509" [e708ba3e-5deb-4b60-ab5b-52c4d671fa46] Running
	I0115 10:44:47.415326   46387 system_pods.go:89] "kube-controller-manager-old-k8s-version-206509" [d9b280c7-481c-4667-9c2a-e0014a625f80] Running
	I0115 10:44:47.415332   46387 system_pods.go:89] "kube-proxy-lh96p" [46eabc9f-7177-4a93-ab84-a131e78e1f38] Running
	I0115 10:44:47.415339   46387 system_pods.go:89] "kube-scheduler-old-k8s-version-206509" [f77ea9e8-c984-4d43-b193-2e747dc5e881] Running
	I0115 10:44:47.415349   46387 system_pods.go:89] "metrics-server-74d5856cc6-q46p8" [98c171f1-6607-4831-ba9f-92391ae2c887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 10:44:47.415355   46387 system_pods.go:89] "storage-provisioner" [312f72ca-acf5-4ff0-8444-01001f408d09] Running
	I0115 10:44:47.415371   46387 system_pods.go:126] duration metric: took 57.64651504s to wait for k8s-apps to be running ...
	I0115 10:44:47.415382   46387 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:44:47.415444   46387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:44:47.433128   46387 system_svc.go:56] duration metric: took 17.740925ms WaitForService to wait for kubelet.
	I0115 10:44:47.433150   46387 kubeadm.go:581] duration metric: took 1m5.827285253s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:44:47.433174   46387 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:44:47.435664   46387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 10:44:47.435685   46387 node_conditions.go:123] node cpu capacity is 2
	I0115 10:44:47.435695   46387 node_conditions.go:105] duration metric: took 2.516113ms to run NodePressure ...
	I0115 10:44:47.435708   46387 start.go:228] waiting for startup goroutines ...
	I0115 10:44:47.435716   46387 start.go:233] waiting for cluster config update ...
	I0115 10:44:47.435728   46387 start.go:242] writing updated cluster config ...
	I0115 10:44:47.436091   46387 ssh_runner.go:195] Run: rm -f paused
	I0115 10:44:47.492053   46387 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0115 10:44:47.494269   46387 out.go:177] 
	W0115 10:44:47.495828   46387 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0115 10:44:47.497453   46387 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0115 10:44:47.498880   46387 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-206509" cluster and "default" namespace by default
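The "==> CRI-O <==" section below, like the per-container "Gathering logs" blocks earlier in this run, is collected over SSH with journalctl and crictl. A minimal sketch of the same collection on the node, with <container-id> standing in for one of the IDs printed by crictl ps:

    # CRI-O runtime journal (source of the "==> CRI-O <==" section below)
    sudo journalctl -u crio -n 400
    # List container IDs for a component, then tail that container's logs
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 <container-id>
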
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-15 10:37:38 UTC, ends at Mon 2024-01-15 10:56:34 UTC. --
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.510746992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316194510721489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0e62332b-5d90-46ca-bca5-bda4d50bab57 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.511411238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=074f29e7-1c2c-4996-84a3-c8c822aed8fd name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.511579015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=074f29e7-1c2c-4996-84a3-c8c822aed8fd name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.511814225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=074f29e7-1c2c-4996-84a3-c8c822aed8fd name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.556223491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e0bf99b7-e446-4cd0-b39c-b7b4a5578e77 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.556334181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e0bf99b7-e446-4cd0-b39c-b7b4a5578e77 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.557535553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6238b14e-554d-436b-8b3d-2689763f4ab5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.557911850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316194557889475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6238b14e-554d-436b-8b3d-2689763f4ab5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.558347929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=802393b6-ab50-4737-8d30-0311b4f26a20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.558475169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=802393b6-ab50-4737-8d30-0311b4f26a20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.558714490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=802393b6-ab50-4737-8d30-0311b4f26a20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.599872814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3a28368d-3e6e-48a6-8f0a-aa46999d5739 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.599957975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3a28368d-3e6e-48a6-8f0a-aa46999d5739 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.601080102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=08171feb-870f-4edd-a336-852d840e2350 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.601569499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316194601552332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=08171feb-870f-4edd-a336-852d840e2350 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.602362383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0b782352-f96c-4964-941f-8205f5ac006a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.602504553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0b782352-f96c-4964-941f-8205f5ac006a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.602767123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0b782352-f96c-4964-941f-8205f5ac006a name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.640763131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9d004048-caca-4b98-ac37-c674ef007bb1 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.640846057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9d004048-caca-4b98-ac37-c674ef007bb1 name=/runtime.v1.RuntimeService/Version
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.642772354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=13761f52-54f2-4512-9230-199dd17fee92 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.643169974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705316194643158120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=13761f52-54f2-4512-9230-199dd17fee92 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.644015706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c055face-310b-4b59-ad6d-f04b4b9f2a46 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.644089468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c055face-310b-4b59-ad6d-f04b4b9f2a46 name=/runtime.v1.RuntimeService/ListContainers
	Jan 15 10:56:34 old-k8s-version-206509 crio[733]: time="2024-01-15 10:56:34.644286994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d,PodSandboxId:6e72267ed704973e9f95700c0bc3ec3a3841f56d02a6bc6f4206e2d6ebfc1e79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705315424375402896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 312f72ca-acf5-4ff0-8444-01001f408d09,},Annotations:map[string]string{io.kubernetes.container.hash: acb66b98,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175,PodSandboxId:303d62fb6c36e49abaa5d090fe56f54a5f8120a286c8ec65d330ded73411bb7b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705315422790243317,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lh96p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eabc9f-7177-4a93-ab84-a131e78e1f38,},Annotations:map[string]string{io.kubernetes.container.hash: 92f84ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e,PodSandboxId:01a88be5a547c467025f11f305cda4789aba91f900fda058b22e375d3dd8a077,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705315421654264904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9k84f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c958bfa-7681-48d0-9627-5116a30efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 87015047,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94,PodSandboxId:f1d772c68201044bf94727dca79d51e69d45355f2609d520df1e6fd154646281,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705315396824646086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1211e12708de87c59f58e6cccb4974df,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f04082c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19,PodSandboxId:f90ec8ef16364825d107b294446779843e6380a2018d5b187d6871a9396156de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705315395215152725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5,PodSandboxId:31342c0e11eed9272c6d3dfef5c335da5d74e5e3d0c11cf48a1d2eff28d65c6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705315394616310709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e127aecf07397be5b721df8f3b50ed22,},Annotations:map[string]string{io.kuberne
tes.container.hash: d2d5a8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37,PodSandboxId:649e66c4c34b95c6bbf57ee34c474a07296adf2b61a1dda459a5f2dc80635830,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705315394454545253,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-206509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[
string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c055face-310b-4b59-ad6d-f04b4b9f2a46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	274ec7c48ab7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   6e72267ed7049       storage-provisioner
	4c363f7ffd7bd       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   303d62fb6c36e       kube-proxy-lh96p
	6a694c01d0dbd       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   01a88be5a547c       coredns-5644d7b6d9-9k84f
	49abd2cf9830f       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   f1d772c682010       etcd-old-k8s-version-206509
	6e41dd19c953b       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   f90ec8ef16364       kube-scheduler-old-k8s-version-206509
	48bba9a9313b0       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   31342c0e11eed       kube-apiserver-old-k8s-version-206509
	fd62511730247       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   649e66c4c34b9       kube-controller-manager-old-k8s-version-206509
	
	
	==> coredns [6a694c01d0dbd2eeb9b0c3b45ec3b48bca5dbce60050e166663ff62b0df5544e] <==
	.:53
	2024-01-15T10:43:42.442Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2024-01-15T10:43:42.442Z [INFO] CoreDNS-1.6.2
	2024-01-15T10:43:42.442Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2024-01-15T10:44:16.262Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               old-k8s-version-206509
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-206509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=old-k8s-version-206509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_43_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:56:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:56:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:56:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:56:21 +0000   Mon, 15 Jan 2024 10:43:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.70
	  Hostname:    old-k8s-version-206509
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 989244633e474b0283881692ca4b18d6
	 System UUID:                98924463-3e47-4b02-8388-1692ca4b18d6
	 Boot ID:                    65965bab-0462-4790-b60f-27d2733e1f9f
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-9k84f                           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-206509                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-206509              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-206509     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-lh96p                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-206509              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-q46p8                    100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-206509     Node old-k8s-version-206509 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-206509  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan15 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068658] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.334851] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.367184] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147498] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.643727] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.883321] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.103317] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.153717] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.112346] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[  +0.203792] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[Jan15 10:38] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +0.372683] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +26.294521] kauditd_printk_skb: 18 callbacks suppressed
	[Jan15 10:43] systemd-fstab-generator[3199]: Ignoring "noauto" for root device
	[ +28.392137] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.065809] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [49abd2cf9830f17e301f3c8f0a28baec6789a5e68f0d87a685b338c7dd1d7b94] <==
	2024-01-15 10:43:16.930888 I | raft: 29bd607c3100bf45 became follower at term 0
	2024-01-15 10:43:16.930896 I | raft: newRaft 29bd607c3100bf45 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-15 10:43:16.930900 I | raft: 29bd607c3100bf45 became follower at term 1
	2024-01-15 10:43:16.939098 W | auth: simple token is not cryptographically signed
	2024-01-15 10:43:16.943021 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-15 10:43:16.944342 I | etcdserver: 29bd607c3100bf45 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-15 10:43:16.945172 I | etcdserver/membership: added member 29bd607c3100bf45 [https://192.168.61.70:2380] to cluster c2d50656252384c
	2024-01-15 10:43:16.945633 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 10:43:16.945804 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-15 10:43:16.945966 I | embed: listening for metrics on http://192.168.61.70:2381
	2024-01-15 10:43:17.131375 I | raft: 29bd607c3100bf45 is starting a new election at term 1
	2024-01-15 10:43:17.131481 I | raft: 29bd607c3100bf45 became candidate at term 2
	2024-01-15 10:43:17.131495 I | raft: 29bd607c3100bf45 received MsgVoteResp from 29bd607c3100bf45 at term 2
	2024-01-15 10:43:17.131503 I | raft: 29bd607c3100bf45 became leader at term 2
	2024-01-15 10:43:17.131510 I | raft: raft.node: 29bd607c3100bf45 elected leader 29bd607c3100bf45 at term 2
	2024-01-15 10:43:17.131992 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-15 10:43:17.133534 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-15 10:43:17.134117 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-15 10:43:17.134228 I | etcdserver: published {Name:old-k8s-version-206509 ClientURLs:[https://192.168.61.70:2379]} to cluster c2d50656252384c
	2024-01-15 10:43:17.134284 I | embed: ready to serve client requests
	2024-01-15 10:43:17.134678 I | embed: ready to serve client requests
	2024-01-15 10:43:17.135783 I | embed: serving client requests on 192.168.61.70:2379
	2024-01-15 10:43:17.143783 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 10:53:17.664084 I | mvcc: store.index: compact 666
	2024-01-15 10:53:17.666390 I | mvcc: finished scheduled compaction at 666 (took 1.672334ms)
	
	
	==> kernel <==
	 10:56:35 up 19 min,  0 users,  load average: 0.18, 0.18, 0.17
	Linux old-k8s-version-206509 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [48bba9a9313b082628bfdb5808066b931cd01b2f2556cfd3bc243a30396797f5] <==
	I0115 10:49:22.005004       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:49:22.005224       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:49:22.005317       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:49:22.005339       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:51:22.005729       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:51:22.005899       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:51:22.005965       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:51:22.005972       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:53:22.005641       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:53:22.005798       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:53:22.005988       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:53:22.006002       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:54:22.006406       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:54:22.006700       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:54:22.006750       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:54:22.006762       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0115 10:56:22.007336       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0115 10:56:22.007500       1 handler_proxy.go:99] no RequestInfo found in the context
	E0115 10:56:22.007585       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 10:56:22.007599       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fd62511730247ab369c19a497a5809447fa21d61c60870d1546a6347a3b40d37] <==
	E0115 10:50:14.188334       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:50:36.949378       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:50:44.440001       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:51:08.951284       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:51:14.692507       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:51:40.953581       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:51:44.944589       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:52:12.955686       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:52:15.197023       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:52:44.957614       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:52:45.448994       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0115 10:53:15.701295       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:53:16.960200       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:53:45.953126       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:53:48.962846       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:54:16.205488       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:54:20.964800       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:54:46.457310       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:54:52.966582       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:55:16.709605       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:55:24.968600       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:55:46.961531       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:55:56.970714       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0115 10:56:17.213616       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0115 10:56:28.972619       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [4c363f7ffd7bdc36ebef9505894402b1eb06038578edd40d4e8bfb85785e6175] <==
	W0115 10:43:43.146772       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0115 10:43:43.156796       1 node.go:135] Successfully retrieved node IP: 192.168.61.70
	I0115 10:43:43.156910       1 server_others.go:149] Using iptables Proxier.
	I0115 10:43:43.157766       1 server.go:529] Version: v1.16.0
	I0115 10:43:43.166401       1 config.go:313] Starting service config controller
	I0115 10:43:43.166832       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0115 10:43:43.166955       1 config.go:131] Starting endpoints config controller
	I0115 10:43:43.166980       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0115 10:43:43.269537       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0115 10:43:43.269789       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [6e41dd19c953b083599b61e6a0b0dab781cbc2599a209dfbe1613415e76c0c19] <==
	W0115 10:43:20.997270       1 authentication.go:79] Authentication is disabled
	I0115 10:43:20.997293       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0115 10:43:21.002178       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0115 10:43:21.036605       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:21.056409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 10:43:21.063134       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 10:43:21.063558       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 10:43:21.063598       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 10:43:21.064083       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 10:43:21.064113       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 10:43:21.064144       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 10:43:21.064186       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 10:43:21.066928       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 10:43:21.067679       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:22.055794       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 10:43:22.057579       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 10:43:22.065582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 10:43:22.067732       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 10:43:22.069264       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 10:43:22.069896       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 10:43:22.071885       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 10:43:22.073165       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 10:43:22.074296       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 10:43:22.076283       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 10:43:22.078563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 10:37:38 UTC, ends at Mon 2024-01-15 10:56:35 UTC. --
	Jan 15 10:52:14 old-k8s-version-206509 kubelet[3205]: E0115 10:52:14.302577    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:27 old-k8s-version-206509 kubelet[3205]: E0115 10:52:27.302506    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:38 old-k8s-version-206509 kubelet[3205]: E0115 10:52:38.302365    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:52:50 old-k8s-version-206509 kubelet[3205]: E0115 10:52:50.302644    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:01 old-k8s-version-206509 kubelet[3205]: E0115 10:53:01.302130    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:12 old-k8s-version-206509 kubelet[3205]: E0115 10:53:12.302112    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:13 old-k8s-version-206509 kubelet[3205]: E0115 10:53:13.383288    3205 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 15 10:53:26 old-k8s-version-206509 kubelet[3205]: E0115 10:53:26.302257    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:38 old-k8s-version-206509 kubelet[3205]: E0115 10:53:38.302356    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:53:51 old-k8s-version-206509 kubelet[3205]: E0115 10:53:51.302401    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:54:03 old-k8s-version-206509 kubelet[3205]: E0115 10:54:03.303290    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:54:18 old-k8s-version-206509 kubelet[3205]: E0115 10:54:18.301846    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:54:33 old-k8s-version-206509 kubelet[3205]: E0115 10:54:33.318277    3205 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:54:33 old-k8s-version-206509 kubelet[3205]: E0115 10:54:33.318340    3205 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:54:33 old-k8s-version-206509 kubelet[3205]: E0115 10:54:33.318384    3205 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 15 10:54:33 old-k8s-version-206509 kubelet[3205]: E0115 10:54:33.318409    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 15 10:54:45 old-k8s-version-206509 kubelet[3205]: E0115 10:54:45.305015    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:54:59 old-k8s-version-206509 kubelet[3205]: E0115 10:54:59.302527    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:55:12 old-k8s-version-206509 kubelet[3205]: E0115 10:55:12.302158    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:55:26 old-k8s-version-206509 kubelet[3205]: E0115 10:55:26.301942    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:55:39 old-k8s-version-206509 kubelet[3205]: E0115 10:55:39.302197    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:55:50 old-k8s-version-206509 kubelet[3205]: E0115 10:55:50.302522    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:56:03 old-k8s-version-206509 kubelet[3205]: E0115 10:56:03.302520    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:56:14 old-k8s-version-206509 kubelet[3205]: E0115 10:56:14.302190    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 15 10:56:28 old-k8s-version-206509 kubelet[3205]: E0115 10:56:28.302267    3205 pod_workers.go:191] Error syncing pod 98c171f1-6607-4831-ba9f-92391ae2c887 ("metrics-server-74d5856cc6-q46p8_kube-system(98c171f1-6607-4831-ba9f-92391ae2c887)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [274ec7c48ab7ac60f2b8d347dd9c8c7bc7c180b908de6e8bc42c660aa3d83b0d] <==
	I0115 10:43:44.505234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:43:44.514853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:43:44.515108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:43:44.524666       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:43:44.525814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc!
	I0115 10:43:44.527318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8072bbe3-0aed-4777-89c1-3b997a5a8d93", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc became leader
	I0115 10:43:44.626506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-206509_0ce01bed-4171-4129-83aa-61a84703e5fc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-206509 -n old-k8s-version-206509
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-206509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-q46p8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8: exit status 1 (72.867626ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-q46p8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-206509 describe pod metrics-server-74d5856cc6-q46p8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.30s)
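A note on this failure: the kubelet log above shows metrics-server-74d5856cc6-q46p8 stuck in ImagePullBackOff for "fake.domain/registry.k8s.io/echoserver:1.4", and by the time the post-mortem ran the pod had already been deleted, so the describe call returned "not found". The "fake.domain" registry strongly suggests the image reference is deliberately unreachable in this profile, although the report itself does not say so. While such a profile is still running, the same state can normally be inspected by hand with standard kubectl commands along these lines (illustrative, not part of the recorded output):

	kubectl --context old-k8s-version-206509 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-206509 -n kube-system describe pod metrics-server-74d5856cc6-q46p8
	kubectl --context old-k8s-version-206509 -n kube-system get events --sort-by=.lastTimestamp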

                                                
                                    

Test pass (248/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.17
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 6.64
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.13
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 6.74
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 104.89
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 148.8
38 TestAddons/parallel/Registry 15.02
40 TestAddons/parallel/InspektorGadget 12.38
41 TestAddons/parallel/MetricsServer 6.31
42 TestAddons/parallel/HelmTiller 12.83
44 TestAddons/parallel/CSI 78.02
45 TestAddons/parallel/Headlamp 16.16
46 TestAddons/parallel/CloudSpanner 7.08
47 TestAddons/parallel/LocalPath 12.54
48 TestAddons/parallel/NvidiaDevicePlugin 6.03
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 95.04
55 TestCertExpiration 284.24
57 TestForceSystemdFlag 83.03
58 TestForceSystemdEnv 75.87
60 TestKVMDriverInstallOrUpdate 3.39
64 TestErrorSpam/setup 46.36
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.75
67 TestErrorSpam/pause 1.49
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 2.25
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.74
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 38.05
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
81 TestFunctional/serial/CacheCmd/cache/add_local 1.41
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 31.04
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.5
92 TestFunctional/serial/LogsFileCmd 1.5
93 TestFunctional/serial/InvalidService 4.01
95 TestFunctional/parallel/ConfigCmd 0.44
96 TestFunctional/parallel/DashboardCmd 14.75
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.13
103 TestFunctional/parallel/ServiceCmdConnect 10.68
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 42.48
107 TestFunctional/parallel/SSHCmd 0.55
108 TestFunctional/parallel/CpCmd 1.59
109 TestFunctional/parallel/MySQL 32
110 TestFunctional/parallel/FileSync 0.26
111 TestFunctional/parallel/CertSync 1.74
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.18
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.36
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
122 TestFunctional/parallel/MountCmd/any-port 10.83
123 TestFunctional/parallel/ProfileCmd/profile_list 0.33
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
137 TestFunctional/parallel/MountCmd/specific-port 1.98
138 TestFunctional/parallel/ServiceCmd/List 0.52
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
142 TestFunctional/parallel/ServiceCmd/Format 0.47
143 TestFunctional/parallel/ServiceCmd/URL 0.43
144 TestFunctional/parallel/Version/short 0.07
145 TestFunctional/parallel/Version/components 0.96
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
151 TestFunctional/parallel/ImageCommands/Setup 0.87
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 8.53
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.02
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.82
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.99
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestIngressAddonLegacy/StartLegacyK8sCluster 82.89
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.94
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
172 TestJSONOutput/start/Command 100.7
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.65
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.65
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.1
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 100.4
204 TestMountStart/serial/StartWithMountFirst 28.85
205 TestMountStart/serial/VerifyMountFirst 0.41
206 TestMountStart/serial/StartWithMountSecond 26.2
207 TestMountStart/serial/VerifyMountSecond 0.41
208 TestMountStart/serial/DeleteFirst 0.87
209 TestMountStart/serial/VerifyMountPostDelete 0.41
210 TestMountStart/serial/Stop 1.14
211 TestMountStart/serial/RestartStopped 23.06
212 TestMountStart/serial/VerifyMountPostStop 0.4
215 TestMultiNode/serial/FreshStart2Nodes 108.41
216 TestMultiNode/serial/DeployApp2Nodes 4.45
218 TestMultiNode/serial/AddNode 42.99
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.63
222 TestMultiNode/serial/StopNode 2.96
223 TestMultiNode/serial/StartAfterStop 30.65
225 TestMultiNode/serial/DeleteNode 1.74
227 TestMultiNode/serial/RestartMultiNode 444.53
228 TestMultiNode/serial/ValidateNameConflict 47.84
235 TestScheduledStopUnix 116.12
239 TestRunningBinaryUpgrade 159.71
241 TestKubernetesUpgrade 234.04
243 TestStoppedBinaryUpgrade/Setup 0.39
255 TestPause/serial/Start 94.8
256 TestStoppedBinaryUpgrade/Upgrade 185.58
261 TestNetworkPlugins/group/false 3.59
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
267 TestNoKubernetes/serial/StartWithK8s 119.42
268 TestPause/serial/SecondStartNoReconfiguration 41.12
269 TestNoKubernetes/serial/StartWithStopK8s 43.07
270 TestPause/serial/Pause 0.72
271 TestPause/serial/VerifyStatus 0.25
272 TestPause/serial/Unpause 0.76
273 TestPause/serial/PauseAgain 0.9
274 TestPause/serial/DeletePaused 0.82
275 TestPause/serial/VerifyDeletedResources 0.28
276 TestNoKubernetes/serial/Start 55.89
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 30.19
280 TestNoKubernetes/serial/Stop 2.88
281 TestNoKubernetes/serial/StartNoArgs 38.5
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
284 TestStartStop/group/old-k8s-version/serial/FirstStart 213.72
286 TestStartStop/group/no-preload/serial/FirstStart 118.88
288 TestStartStop/group/embed-certs/serial/FirstStart 112.54
289 TestStartStop/group/no-preload/serial/DeployApp 9.31
290 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
295 TestStartStop/group/embed-certs/serial/DeployApp 8.29
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.09
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
305 TestStartStop/group/old-k8s-version/serial/SecondStart 718.25
306 TestStartStop/group/no-preload/serial/SecondStart 666.21
308 TestStartStop/group/embed-certs/serial/SecondStart 602.4
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 502.77
320 TestStartStop/group/newest-cni/serial/FirstStart 60.06
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.56
323 TestStartStop/group/newest-cni/serial/Stop 11.13
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/newest-cni/serial/SecondStart 71.6
326 TestNetworkPlugins/group/auto/Start 102.94
327 TestNetworkPlugins/group/kindnet/Start 70.42
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
331 TestStartStop/group/newest-cni/serial/Pause 2.99
332 TestNetworkPlugins/group/calico/Start 94.8
333 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
335 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
336 TestNetworkPlugins/group/auto/KubeletFlags 0.24
337 TestNetworkPlugins/group/auto/NetCatPod 12.25
338 TestNetworkPlugins/group/kindnet/DNS 0.18
339 TestNetworkPlugins/group/kindnet/Localhost 0.16
340 TestNetworkPlugins/group/kindnet/HairPin 0.18
341 TestNetworkPlugins/group/auto/DNS 0.18
342 TestNetworkPlugins/group/auto/Localhost 0.16
343 TestNetworkPlugins/group/auto/HairPin 0.17
344 TestNetworkPlugins/group/custom-flannel/Start 96.66
345 TestNetworkPlugins/group/enable-default-cni/Start 131.55
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/Start 125.51
348 TestNetworkPlugins/group/calico/KubeletFlags 0.2
349 TestNetworkPlugins/group/calico/NetCatPod 11.24
350 TestNetworkPlugins/group/calico/DNS 0.23
351 TestNetworkPlugins/group/calico/Localhost 0.28
352 TestNetworkPlugins/group/calico/HairPin 0.18
353 TestNetworkPlugins/group/bridge/Start 131.91
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
356 TestNetworkPlugins/group/custom-flannel/DNS 0.23
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
366 TestNetworkPlugins/group/flannel/NetCatPod 11.27
367 TestNetworkPlugins/group/flannel/DNS 0.19
368 TestNetworkPlugins/group/flannel/Localhost 0.14
369 TestNetworkPlugins/group/flannel/HairPin 0.15
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
371 TestNetworkPlugins/group/bridge/NetCatPod 12.25
372 TestNetworkPlugins/group/bridge/DNS 0.18
373 TestNetworkPlugins/group/bridge/Localhost 0.13
374 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.16.0/json-events (9.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-079711 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-079711 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.17257176s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-079711
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-079711: exit status 85 (71.752714ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |          |
	|         | -p download-only-079711        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:28.127205   13494 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:28.127461   13494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:28.127469   13494 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:28.127474   13494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:28.127646   13494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	W0115 09:26:28.127755   13494 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17953-4821/.minikube/config/config.json: open /home/jenkins/minikube-integration/17953-4821/.minikube/config/config.json: no such file or directory
	I0115 09:26:28.128320   13494 out.go:303] Setting JSON to true
	I0115 09:26:28.129128   13494 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":488,"bootTime":1705310300,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:28.129185   13494 start.go:138] virtualization: kvm guest
	I0115 09:26:28.131775   13494 out.go:97] [download-only-079711] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:28.133245   13494 out.go:169] MINIKUBE_LOCATION=17953
	I0115 09:26:28.131873   13494 notify.go:220] Checking for updates...
	W0115 09:26:28.131922   13494 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 09:26:28.135913   13494 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:28.137237   13494 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:26:28.138520   13494 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:28.139720   13494 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:28.142220   13494 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:28.142462   13494 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:28.244504   13494 out.go:97] Using the kvm2 driver based on user configuration
	I0115 09:26:28.244529   13494 start.go:298] selected driver: kvm2
	I0115 09:26:28.244537   13494 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:26:28.244845   13494 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:28.244971   13494 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:26:28.259413   13494 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:26:28.259472   13494 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:28.259962   13494 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 09:26:28.260145   13494 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:28.260216   13494 cni.go:84] Creating CNI manager for ""
	I0115 09:26:28.260234   13494 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:26:28.260249   13494 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:28.260260   13494 start_flags.go:321] config:
	{Name:download-only-079711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-079711 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:28.260499   13494 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:28.262274   13494 out.go:97] Downloading VM boot image ...
	I0115 09:26:28.262324   13494 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 09:26:30.282906   13494 out.go:97] Starting control plane node download-only-079711 in cluster download-only-079711
	I0115 09:26:30.282927   13494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 09:26:30.310847   13494 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:30.310888   13494 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:30.311047   13494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 09:26:30.312674   13494 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 09:26:30.312693   13494 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:30.339684   13494 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-079711"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
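The non-zero exit from "minikube logs" above is expected rather than a defect: the profile was created with --download-only, so no control plane node was ever started, and the command's own output says the control plane node "" does not exist. The sequence the test exercises boils down to the following (commands copied from the run above, shown together only for readability):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-079711 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 logs -p download-only-079711   # exits with status 85 because no node exists for this profile

The test only measures how long the logs call takes and accepts the exit status 85, which is why the case still counts as a PASS. The same pattern repeats for the v1.28.4 and v1.29.0-rc.2 download-only profiles below.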

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-079711
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (6.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-200610 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-200610 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.636570524s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-200610
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-200610: exit status 85 (68.125147ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-079711        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-079711        | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only        | download-only-200610 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-200610        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:37.641471   13659 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:37.641713   13659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:37.641722   13659 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:37.641727   13659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:37.641924   13659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:26:37.642512   13659 out.go:303] Setting JSON to true
	I0115 09:26:37.643287   13659 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":498,"bootTime":1705310300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:37.643377   13659 start.go:138] virtualization: kvm guest
	I0115 09:26:37.645774   13659 out.go:97] [download-only-200610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:37.647333   13659 out.go:169] MINIKUBE_LOCATION=17953
	I0115 09:26:37.645887   13659 notify.go:220] Checking for updates...
	I0115 09:26:37.650358   13659 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:37.651889   13659 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:26:37.653224   13659 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:37.654697   13659 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:37.657399   13659 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:37.657599   13659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:37.689425   13659 out.go:97] Using the kvm2 driver based on user configuration
	I0115 09:26:37.689458   13659 start.go:298] selected driver: kvm2
	I0115 09:26:37.689467   13659 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:26:37.689746   13659 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:37.689820   13659 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:26:37.703963   13659 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:26:37.704022   13659 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:37.704492   13659 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 09:26:37.704623   13659 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:37.704671   13659 cni.go:84] Creating CNI manager for ""
	I0115 09:26:37.704688   13659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:26:37.704701   13659 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:37.704709   13659 start_flags.go:321] config:
	{Name:download-only-200610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-200610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:37.704815   13659 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:37.706479   13659 out.go:97] Starting control plane node download-only-200610 in cluster download-only-200610
	I0115 09:26:37.706493   13659 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:26:37.736912   13659 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:37.736938   13659 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:37.737068   13659 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:26:37.738940   13659 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0115 09:26:37.738959   13659 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:37.766634   13659 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:40.613997   13659 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:40.614090   13659 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:41.542603   13659 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:26:41.542927   13659 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/download-only-200610/config.json ...
	I0115 09:26:41.542956   13659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/download-only-200610/config.json: {Name:mkf5a39c15501fee89883ed91c280fe7a09986a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:26:41.543102   13659 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:26:41.543228   13659 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-200610"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-200610
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (6.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-479178 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-479178 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.736658546s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (6.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-479178
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-479178: exit status 85 (71.058012ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-079711           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-079711           | download-only-079711 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only           | download-only-200610 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-200610           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-200610           | download-only-200610 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only           | download-only-479178 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-479178           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:44.610602   13819 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:44.610732   13819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:44.610742   13819 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:44.610750   13819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:44.610961   13819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:26:44.611511   13819 out.go:303] Setting JSON to true
	I0115 09:26:44.612293   13819 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":505,"bootTime":1705310300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:44.612349   13819 start.go:138] virtualization: kvm guest
	I0115 09:26:44.614471   13819 out.go:97] [download-only-479178] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:44.615876   13819 out.go:169] MINIKUBE_LOCATION=17953
	I0115 09:26:44.614616   13819 notify.go:220] Checking for updates...
	I0115 09:26:44.618305   13819 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:44.619595   13819 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:26:44.620841   13819 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:26:44.622257   13819 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:44.624853   13819 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:44.625044   13819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:44.655925   13819 out.go:97] Using the kvm2 driver based on user configuration
	I0115 09:26:44.655954   13819 start.go:298] selected driver: kvm2
	I0115 09:26:44.655966   13819 start.go:902] validating driver "kvm2" against <nil>
	I0115 09:26:44.656259   13819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:44.656322   13819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17953-4821/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 09:26:44.669746   13819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 09:26:44.669785   13819 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:44.670266   13819 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 09:26:44.670569   13819 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:44.670644   13819 cni.go:84] Creating CNI manager for ""
	I0115 09:26:44.670662   13819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0115 09:26:44.670674   13819 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:44.670687   13819 start_flags.go:321] config:
	{Name:download-only-479178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-479178 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:44.670881   13819 iso.go:125] acquiring lock: {Name:mk880a4c3f7bf7750326e00badbd880e6c6a3b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:26:44.672567   13819 out.go:97] Starting control plane node download-only-479178 in cluster download-only-479178
	I0115 09:26:44.672579   13819 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 09:26:44.702380   13819 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:44.702408   13819 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:44.702558   13819 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 09:26:44.704282   13819 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 09:26:44.704299   13819 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:44.738066   13819 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:47.588072   13819 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:47.588163   13819 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-4821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:48.399233   13819 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0115 09:26:48.399547   13819 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/download-only-479178/config.json ...
	I0115 09:26:48.399573   13819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/download-only-479178/config.json: {Name:mk43555f4287746c2488ac2336f0d93fbea9a5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:26:48.399708   13819 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 09:26:48.399868   13819 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17953-4821/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-479178"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-479178
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-365521 --alsologtostderr --binary-mirror http://127.0.0.1:38145 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-365521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-365521
--- PASS: TestBinaryMirror (0.57s)
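TestBinaryMirror passes --binary-mirror http://127.0.0.1:38145 to a download-only start, which, as the flag name implies, points the Kubernetes binary downloads at that local endpoint instead of the default upstream location; the profile is then deleted. Reproducing the flow by hand reduces to the two commands recorded above:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-365521 --alsologtostderr --binary-mirror http://127.0.0.1:38145 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-365521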

                                                
                                    
x
+
TestOffline (104.89s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-592715 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-592715 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.849401236s)
helpers_test.go:175: Cleaning up "offline-crio-592715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-592715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-592715: (1.045023945s)
--- PASS: TestOffline (104.89s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-732359
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-732359: exit status 85 (64.532031ms)

                                                
                                                
-- stdout --
	* Profile "addons-732359" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-732359"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-732359
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-732359: exit status 85 (60.126576ms)

                                                
                                                
-- stdout --
	* Profile "addons-732359" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-732359"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (148.8s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-732359 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-732359 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.795295388s)
--- PASS: TestAddons/Setup (148.80s)

                                                
                                    
TestAddons/parallel/Registry (15.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.469329ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k5ln6" [45857e37-425a-4aaf-8eff-8045af09133f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006071421s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mm4zk" [a7a11774-4ce1-44cc-8c52-48e10c08ab41] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00680888s
addons_test.go:340: (dbg) Run:  kubectl --context addons-732359 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-732359 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-732359 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.935051269s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 ip
2024/01/15 09:29:35 [DEBUG] GET http://192.168.39.21:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.02s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.38s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q9rvd" [e26314ef-e869-440c-813c-ed23e94d0b6f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00741562s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-732359
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-732359: (6.374482715s)
--- PASS: TestAddons/parallel/InspektorGadget (12.38s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.788726ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-27qc5" [1ecc618d-a070-4472-8f68-a2c66a387805] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005077662s
addons_test.go:415: (dbg) Run:  kubectl --context addons-732359 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-732359 addons disable metrics-server --alsologtostderr -v=1: (1.237765111s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.83s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 8.815739ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-vhknn" [77b631a8-d1fb-4ad4-82e3-60df11d8591c] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.016015663s
addons_test.go:473: (dbg) Run:  kubectl --context addons-732359 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-732359 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.129634162s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.83s)

                                                
                                    
TestAddons/parallel/CSI (78.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.967494ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-732359 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-732359 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [52cd2c8f-4f88-4715-8ad2-3bc7f28f7ccb] Pending
helpers_test.go:344: "task-pv-pod" [52cd2c8f-4f88-4715-8ad2-3bc7f28f7ccb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [52cd2c8f-4f88-4715-8ad2-3bc7f28f7ccb] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.005613834s
addons_test.go:584: (dbg) Run:  kubectl --context addons-732359 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-732359 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-732359 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-732359 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-732359 delete pod task-pv-pod: (1.059433494s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-732359 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-732359 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-732359 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1e95a5f6-29da-4137-9765-8305a5c219b5] Pending
helpers_test.go:344: "task-pv-pod-restore" [1e95a5f6-29da-4137-9765-8305a5c219b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1e95a5f6-29da-4137-9765-8305a5c219b5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004999184s
addons_test.go:626: (dbg) Run:  kubectl --context addons-732359 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-732359 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-732359 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-732359 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.802723855s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (78.02s)

                                                
                                    
TestAddons/parallel/Headlamp (16.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-732359 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-732359 --alsologtostderr -v=1: (2.153696371s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-d6kzs" [5e5178d2-4c98-44f0-8e54-80b2e8dd906b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-d6kzs" [5e5178d2-4c98-44f0-8e54-80b2e8dd906b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005125196s
--- PASS: TestAddons/parallel/Headlamp (16.16s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-v2ss4" [c3a10028-48cc-49a4-b995-7aadcf286199] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004738153s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-732359
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-732359: (1.067174988s)
--- PASS: TestAddons/parallel/CloudSpanner (7.08s)

                                                
                                    
TestAddons/parallel/LocalPath (12.54s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-732359 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-732359 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e6040e40-2f74-403f-a3ba-55c709e35cb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e6040e40-2f74-403f-a3ba-55c709e35cb3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e6040e40-2f74-403f-a3ba-55c709e35cb3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004312127s
addons_test.go:891: (dbg) Run:  kubectl --context addons-732359 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 ssh "cat /opt/local-path-provisioner/pvc-b866449b-b281-439c-be7d-a58afe1f764c_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-732359 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-732359 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-732359 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.54s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.03s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tghvb" [bc860577-d720-42df-8ecd-e81df841a4d1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00919617s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-732359
addons_test.go:955: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-732359: (1.023905293s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.03s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-hrthp" [4f25ae89-d986-4bdd-8b8b-dd221b88488d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005270317s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-732359 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-732359 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestCertOptions (95.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-967423 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-967423 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m33.501209092s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-967423 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-967423 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-967423 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-967423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-967423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-967423: (1.027770594s)
--- PASS: TestCertOptions (95.04s)

                                                
                                    
TestCertExpiration (284.24s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-252810 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-252810 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m22.9792731s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-252810 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-252810 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.469713272s)
helpers_test.go:175: Cleaning up "cert-expiration-252810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-252810
--- PASS: TestCertExpiration (284.24s)

                                                
                                    
TestForceSystemdFlag (83.03s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-200325 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0115 10:24:12.883845   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-200325 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.761966745s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-200325 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-200325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-200325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-200325: (1.050784878s)
--- PASS: TestForceSystemdFlag (83.03s)

                                                
                                    
TestForceSystemdEnv (75.87s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-034609 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-034609 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.431909431s)
helpers_test.go:175: Cleaning up "force-systemd-env-034609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-034609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-034609: (1.434742954s)
--- PASS: TestForceSystemdEnv (75.87s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.39s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.39s)

                                                
                                    
TestErrorSpam/setup (46.36s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-639440 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-639440 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-639440 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-639440 --driver=kvm2  --container-runtime=crio: (46.364288531s)
--- PASS: TestErrorSpam/setup (46.36s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (2.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 stop: (2.090328527s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-639440 --log_dir /tmp/nospam-639440 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17953-4821/.minikube/files/etc/test/nested/copy/13482/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.74s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-302200 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.735614782s)
--- PASS: TestFunctional/serial/StartWithProxy (99.74s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-302200 --alsologtostderr -v=8: (38.049791036s)
functional_test.go:659: soft start took 38.050522229s for "functional-302200" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-302200 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 cache add registry.k8s.io/pause:3.3: (1.019087211s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 cache add registry.k8s.io/pause:latest: (1.009864889s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-302200 /tmp/TestFunctionalserialCacheCmdcacheadd_local2545906833/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache add minikube-local-cache-test:functional-302200
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 cache add minikube-local-cache-test:functional-302200: (1.081700019s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache delete minikube-local-cache-test:functional-302200
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-302200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (227.836053ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 kubectl -- --context functional-302200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-302200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-302200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.036067865s)
functional_test.go:757: restart took 31.036193458s for "functional-302200" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.04s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-302200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 logs: (1.504655795s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 logs --file /tmp/TestFunctionalserialLogsFileCmd112944876/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 logs --file /tmp/TestFunctionalserialLogsFileCmd112944876/001/logs.txt: (1.500186627s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-302200 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-302200
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-302200: exit status 115 (296.690905ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.213:30485 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-302200 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 config get cpus: exit status 14 (73.312261ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 config get cpus: exit status 14 (55.803139ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
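For reference, the config round-trip exercised above can be reproduced by hand; a minimal sketch against the same profile, assuming the built minikube binary is on PATH (exit status 14 is what the log shows when the key is absent from the config):

    $ minikube -p functional-302200 config unset cpus
    $ minikube -p functional-302200 config get cpus      # exit 14: key not found
    $ minikube -p functional-302200 config set cpus 2
    $ minikube -p functional-302200 config get cpus      # prints 2
    $ minikube -p functional-302200 config unset cpus
    $ minikube -p functional-302200 config get cpus      # exit 14 again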

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-302200 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-302200 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20623: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.75s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-302200 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.75201ms)

                                                
                                                
-- stdout --
	* [functional-302200] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:39:14.497337   20217 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:39:14.497591   20217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:39:14.497602   20217 out.go:309] Setting ErrFile to fd 2...
	I0115 09:39:14.497607   20217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:39:14.497852   20217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:39:14.498445   20217 out.go:303] Setting JSON to false
	I0115 09:39:14.499436   20217 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1255,"bootTime":1705310300,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:39:14.499492   20217 start.go:138] virtualization: kvm guest
	I0115 09:39:14.501376   20217 out.go:177] * [functional-302200] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:39:14.503057   20217 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:39:14.503061   20217 notify.go:220] Checking for updates...
	I0115 09:39:14.504561   20217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:39:14.506068   20217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:39:14.507630   20217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:39:14.509160   20217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:39:14.510590   20217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:39:14.512402   20217 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:39:14.512872   20217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:39:14.512923   20217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:39:14.528386   20217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45335
	I0115 09:39:14.528800   20217 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:39:14.529324   20217 main.go:141] libmachine: Using API Version  1
	I0115 09:39:14.529355   20217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:39:14.529706   20217 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:39:14.529907   20217 main.go:141] libmachine: (functional-302200) Calling .DriverName
	I0115 09:39:14.530126   20217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:39:14.530394   20217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:39:14.530447   20217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:39:14.545431   20217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0115 09:39:14.545755   20217 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:39:14.546173   20217 main.go:141] libmachine: Using API Version  1
	I0115 09:39:14.546199   20217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:39:14.546570   20217 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:39:14.546738   20217 main.go:141] libmachine: (functional-302200) Calling .DriverName
	I0115 09:39:14.581444   20217 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 09:39:14.582991   20217 start.go:298] selected driver: kvm2
	I0115 09:39:14.583009   20217 start.go:902] validating driver "kvm2" against &{Name:functional-302200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-302200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.213 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:39:14.583463   20217 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:39:14.586125   20217 out.go:177] 
	W0115 09:39:14.587566   20217 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 09:39:14.589040   20217 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
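The non-zero exit above is the expected outcome: --dry-run still runs config validation, and the requested 250MB is below the 1800MB usable minimum, so minikube exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23) without touching the cluster. A minimal sketch of the same check, assuming the binary is on PATH:

    $ minikube start -p functional-302200 --dry-run --memory 250MB \
        --driver=kvm2 --container-runtime=crio
    $ echo $?                                            # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
    $ minikube start -p functional-302200 --dry-run --alsologtostderr -v=1 \
        --driver=kvm2 --container-runtime=crio           # valid request, passes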

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-302200 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-302200 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (176.197279ms)

                                                
                                                
-- stdout --
	* [functional-302200] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:39:14.335056   20141 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:39:14.335186   20141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:39:14.335200   20141 out.go:309] Setting ErrFile to fd 2...
	I0115 09:39:14.335207   20141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:39:14.335515   20141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:39:14.336076   20141 out.go:303] Setting JSON to false
	I0115 09:39:14.337038   20141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1254,"bootTime":1705310300,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:39:14.337124   20141 start.go:138] virtualization: kvm guest
	I0115 09:39:14.339384   20141 out.go:177] * [functional-302200] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0115 09:39:14.341496   20141 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:39:14.341567   20141 notify.go:220] Checking for updates...
	I0115 09:39:14.344252   20141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:39:14.345885   20141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 09:39:14.347429   20141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 09:39:14.349054   20141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:39:14.350722   20141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:39:14.352549   20141 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:39:14.353115   20141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:39:14.353166   20141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:39:14.373247   20141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0115 09:39:14.373901   20141 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:39:14.374563   20141 main.go:141] libmachine: Using API Version  1
	I0115 09:39:14.374586   20141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:39:14.375038   20141 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:39:14.375189   20141 main.go:141] libmachine: (functional-302200) Calling .DriverName
	I0115 09:39:14.375417   20141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:39:14.375825   20141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:39:14.375864   20141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:39:14.396891   20141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0115 09:39:14.397298   20141 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:39:14.397813   20141 main.go:141] libmachine: Using API Version  1
	I0115 09:39:14.397835   20141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:39:14.398286   20141 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:39:14.398482   20141 main.go:141] libmachine: (functional-302200) Calling .DriverName
	I0115 09:39:14.431480   20141 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0115 09:39:14.432787   20141 start.go:298] selected driver: kvm2
	I0115 09:39:14.432800   20141 start.go:902] validating driver "kvm2" against &{Name:functional-302200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-302200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.213 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:39:14.432881   20141 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:39:14.434937   20141 out.go:177] 
	W0115 09:39:14.436034   20141 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 09:39:14.437565   20141 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
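The French output is the point of this test: it is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun, rendered through minikube's translations ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." is the French form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A rough way to reproduce it, assuming minikube picks the locale up from the usual LC_ALL/LANG environment variables:

    $ LC_ALL=fr_FR.UTF-8 minikube start -p functional-302200 --dry-run --memory 250MB \
        --driver=kvm2 --container-runtime=crio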

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-302200 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-302200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xxbmh" [e38bc0eb-a7ab-4d93-a720-17500fd8049e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xxbmh" [e38bc0eb-a7ab-4d93-a720-17500fd8049e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008460037s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.213:31152
functional_test.go:1674: http://192.168.50.213:31152: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-xxbmh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.213:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.213:31152
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
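The flow above is a standard NodePort round-trip: create a deployment, expose it, ask minikube for the node URL, then hit it. A condensed sketch of the same steps (the final curl is added here for illustration and is not part of the logged test):

    $ kubectl --context functional-302200 create deployment hello-node-connect \
        --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-302200 expose deployment hello-node-connect \
        --type=NodePort --port=8080
    $ minikube -p functional-302200 service hello-node-connect --url
    http://192.168.50.213:31152
    $ curl http://192.168.50.213:31152/                  # echoserver echoes the request back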

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (42.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5319909d-aaf4-41c3-87e0-74a92302a787] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005501899s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-302200 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-302200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-302200 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-302200 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-302200 apply -f testdata/storage-provisioner/pod.yaml
E0115 09:39:26.575308   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [07be06fb-92f2-462b-8bba-e836ebc018ff] Pending
helpers_test.go:344: "sp-pod" [07be06fb-92f2-462b-8bba-e836ebc018ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [07be06fb-92f2-462b-8bba-e836ebc018ff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003832591s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-302200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-302200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-302200 delete -f testdata/storage-provisioner/pod.yaml: (1.853678751s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-302200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a025a4a-2ed9-4e54-9d34-5c9db05fc389] Pending
helpers_test.go:344: "sp-pod" [3a025a4a-2ed9-4e54-9d34-5c9db05fc389] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a025a4a-2ed9-4e54-9d34-5c9db05fc389] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004024431s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-302200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.48s)
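What this test checks is that data written into the claim survives pod deletion: a file is created through the first sp-pod, the pod is deleted and recreated against the same PVC, and the file is still visible. Reduced to the commands that matter (the manifests are the testdata ones referenced above):

    $ kubectl --context functional-302200 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-302200 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-302200 exec sp-pod -- touch /tmp/mount/foo
    $ kubectl --context functional-302200 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-302200 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-302200 exec sp-pod -- ls /tmp/mount   # foo is still there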

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh -n functional-302200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cp functional-302200:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4106439596/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh -n functional-302200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh -n functional-302200 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.59s)
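minikube cp is exercised in both directions here, including a copy into a guest path that does not exist yet (which the subsequent cat shows is created on the fly). In short, with the local destination path shortened for readability:

    $ minikube -p functional-302200 cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ minikube -p functional-302200 cp functional-302200:/home/docker/cp-test.txt ./cp-test.txt
    $ minikube -p functional-302200 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    $ minikube -p functional-302200 ssh "sudo cat /tmp/does/not/exist/cp-test.txt"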

                                                
                                    
x
+
TestFunctional/parallel/MySQL (32.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-302200 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-6b4z7" [4883d4f2-5cf7-4af8-b2e8-2b5ea0ae024b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0115 09:39:31.696232   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-6b4z7" [4883d4f2-5cf7-4af8-b2e8-2b5ea0ae024b] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.005579959s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- mysql -ppassword -e "show databases;": exit status 1 (179.606457ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- mysql -ppassword -e "show databases;": exit status 1 (186.536135ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.00s)
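The two ERROR 2002 failures are benign: the pod is Running, but mysqld inside it has not finished initializing, so the client cannot reach the socket yet. The test simply retries the same query until it succeeds; by hand that looks roughly like:

    $ kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- \
        mysql -ppassword -e "show databases;"
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
    $ sleep 5    # give mysqld time to come up, then retry
    $ kubectl --context functional-302200 exec mysql-859648c796-6b4z7 -- \
        mysql -ppassword -e "show databases;"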

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/13482/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /etc/test/nested/copy/13482/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/13482.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /etc/ssl/certs/13482.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/13482.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /usr/share/ca-certificates/13482.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/134822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /etc/ssl/certs/134822.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/134822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /usr/share/ca-certificates/134822.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)
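CertSync verifies that the host-side test certificates (named after the test run's PID, 13482) were synced into the guest at both /etc/ssl/certs and /usr/share/ca-certificates, together with the hash-named files (51391683.0, 3ec20f2e.0). A quick manual spot-check against the same profile; the openssl line is only an illustration of where the hash-style names come from, assuming the usual OpenSSL subject-hash convention:

    $ minikube -p functional-302200 ssh "sudo cat /etc/ssl/certs/13482.pem"
    $ minikube -p functional-302200 ssh "sudo cat /etc/ssl/certs/51391683.0"
    $ openssl x509 -in 13482.pem -noout -subject_hash    # hash used for the .0 file name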

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-302200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "sudo systemctl is-active docker": exit status 1 (232.475124ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "sudo systemctl is-active containerd": exit status 1 (234.513286ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
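With crio selected as the container runtime, the other runtimes' systemd units are expected to be inactive inside the VM. systemctl is-active exits with status 3 for an inactive unit, which is why the ssh wrapper reports exit status 1 here even though stdout reads "inactive" as intended. Spot-checking by hand (the crio line is an added sanity check, expected to report active):

    $ minikube -p functional-302200 ssh "sudo systemctl is-active docker"        # inactive
    $ minikube -p functional-302200 ssh "sudo systemctl is-active containerd"    # inactive
    $ minikube -p functional-302200 ssh "sudo systemctl is-active crio"          # active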

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-302200 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-302200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-7brw6" [0c89cfd6-92c6-47f9-84d6-5f95c195af46] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-7brw6" [0c89cfd6-92c6-47f9-84d6-5f95c195af46] Running
E0115 09:39:21.453065   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:21.458980   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:21.469212   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:21.489987   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:21.530230   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:21.611011   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.152304532s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdany-port3282369466/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705311553116233372" to /tmp/TestFunctionalparallelMountCmdany-port3282369466/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705311553116233372" to /tmp/TestFunctionalparallelMountCmdany-port3282369466/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705311553116233372" to /tmp/TestFunctionalparallelMountCmdany-port3282369466/001/test-1705311553116233372
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.477417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 09:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 09:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 09:39 test-1705311553116233372
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh cat /mount-9p/test-1705311553116233372
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-302200 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [114460cb-7ee3-459d-99b0-942ece4ac59f] Pending
helpers_test.go:344: "busybox-mount" [114460cb-7ee3-459d-99b0-942ece4ac59f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [114460cb-7ee3-459d-99b0-942ece4ac59f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0115 09:39:21.771661   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:22.092539   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:39:22.733580   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [114460cb-7ee3-459d-99b0-942ece4ac59f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00509179s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-302200 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdany-port3282369466/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.83s)
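The mount tests drive minikube's 9p host mount: a host temp directory is mounted into the guest at /mount-9p, verified with findmnt and ls, then exercised by the busybox-mount pod before the mount process is torn down; the first findmnt failure is just a race with the mount coming up. Roughly, with a placeholder host directory of your own:

    $ minikube mount -p functional-302200 /tmp/hostdir:/mount-9p &
    $ minikube -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p"
    $ minikube -p functional-302200 ssh -- ls -la /mount-9p
    $ minikube -p functional-302200 ssh "sudo umount -f /mount-9p"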

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "266.975655ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "62.56482ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "235.227134ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "63.24689ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdspecific-port1746654688/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p"
E0115 09:39:24.014326   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.072437ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdspecific-port1746654688/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "sudo umount -f /mount-9p": exit status 1 (270.451072ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-302200 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdspecific-port1746654688/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service list -o json
functional_test.go:1493: Took "525.169731ms" to run "out/minikube-linux-amd64 -p functional-302200 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount1: exit status 1 (303.90099ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-302200 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-302200 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1448410761/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)
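The cleanup test above follows a simple pattern: start several background mount daemons exposing one host directory at /mount1, /mount2, and /mount3, confirm each target with findmnt over ssh, then tear them all down with a single --kill=true call. A hand-run sketch of the same flow, reusing the functional-302200 profile from this run and with <host-dir> as a hypothetical stand-in for any host directory, would be:

    out/minikube-linux-amd64 mount -p functional-302200 <host-dir>:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-302200 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-302200 --kill=true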

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.213:32137
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.213:32137
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.96s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-302200 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-302200
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-302200
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-302200 image ls --format short --alsologtostderr:
I0115 09:39:57.355563   22056 out.go:296] Setting OutFile to fd 1 ...
I0115 09:39:57.355762   22056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.355788   22056 out.go:309] Setting ErrFile to fd 2...
I0115 09:39:57.355802   22056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.356044   22056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
I0115 09:39:57.356680   22056 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.356782   22056 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.357141   22056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.357197   22056 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.372998   22056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
I0115 09:39:57.373394   22056 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.374001   22056 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.374033   22056 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.374395   22056 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.374576   22056 main.go:141] libmachine: (functional-302200) Calling .GetState
I0115 09:39:57.376492   22056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.376532   22056 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.389621   22056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
I0115 09:39:57.389985   22056 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.390467   22056 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.390489   22056 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.390837   22056 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.391038   22056 main.go:141] libmachine: (functional-302200) Calling .DriverName
I0115 09:39:57.391236   22056 ssh_runner.go:195] Run: systemctl --version
I0115 09:39:57.391262   22056 main.go:141] libmachine: (functional-302200) Calling .GetSSHHostname
I0115 09:39:57.393925   22056 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.394236   22056 main.go:141] libmachine: (functional-302200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:a4:c3", ip: ""} in network mk-functional-302200: {Iface:virbr1 ExpiryTime:2024-01-15 10:36:25 +0000 UTC Type:0 Mac:52:54:00:5a:a4:c3 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:functional-302200 Clientid:01:52:54:00:5a:a4:c3}
I0115 09:39:57.394270   22056 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined IP address 192.168.50.213 and MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.394398   22056 main.go:141] libmachine: (functional-302200) Calling .GetSSHPort
I0115 09:39:57.394566   22056 main.go:141] libmachine: (functional-302200) Calling .GetSSHKeyPath
I0115 09:39:57.394730   22056 main.go:141] libmachine: (functional-302200) Calling .GetSSHUsername
I0115 09:39:57.394855   22056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/functional-302200/id_rsa Username:docker}
I0115 09:39:57.485573   22056 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 09:39:57.566552   22056 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.566570   22056 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.566844   22056 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.566859   22056 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:39:57.566874   22056 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.566883   22056 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.567093   22056 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.567113   22056 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:39:57.567112   22056 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-302200 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-302200  | 0a8effd667411 | 3.35kB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-302200  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-302200 image ls --format table --alsologtostderr:
I0115 09:39:57.675807   22113 out.go:296] Setting OutFile to fd 1 ...
I0115 09:39:57.675929   22113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.675940   22113 out.go:309] Setting ErrFile to fd 2...
I0115 09:39:57.675947   22113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.676226   22113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
I0115 09:39:57.677033   22113 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.677179   22113 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.677762   22113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.677826   22113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.692329   22113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
I0115 09:39:57.692683   22113 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.693233   22113 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.693261   22113 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.693613   22113 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.693787   22113 main.go:141] libmachine: (functional-302200) Calling .GetState
I0115 09:39:57.695304   22113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.695343   22113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.708246   22113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
I0115 09:39:57.708549   22113 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.708983   22113 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.709010   22113 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.709317   22113 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.709524   22113 main.go:141] libmachine: (functional-302200) Calling .DriverName
I0115 09:39:57.709726   22113 ssh_runner.go:195] Run: systemctl --version
I0115 09:39:57.709751   22113 main.go:141] libmachine: (functional-302200) Calling .GetSSHHostname
I0115 09:39:57.712347   22113 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.712811   22113 main.go:141] libmachine: (functional-302200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:a4:c3", ip: ""} in network mk-functional-302200: {Iface:virbr1 ExpiryTime:2024-01-15 10:36:25 +0000 UTC Type:0 Mac:52:54:00:5a:a4:c3 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:functional-302200 Clientid:01:52:54:00:5a:a4:c3}
I0115 09:39:57.712837   22113 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined IP address 192.168.50.213 and MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.712993   22113 main.go:141] libmachine: (functional-302200) Calling .GetSSHPort
I0115 09:39:57.713155   22113 main.go:141] libmachine: (functional-302200) Calling .GetSSHKeyPath
I0115 09:39:57.713308   22113 main.go:141] libmachine: (functional-302200) Calling .GetSSHUsername
I0115 09:39:57.713419   22113 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/functional-302200/id_rsa Username:docker}
I0115 09:39:57.821485   22113 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 09:39:57.893562   22113 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.893582   22113 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.893844   22113 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.893861   22113 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:39:57.893876   22113 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.893885   22113 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.893902   22113 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
I0115 09:39:57.894126   22113 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
I0115 09:39:57.894163   22113 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.894193   22113 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-302200 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff144
24cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],
"size":"31470524"},{"id":"0a8effd6674111a23c081f58d5bc1f3c285204c6f222cbb22c3a802d3c02b43a","repoDigests":["localhost/minikube-local-cache-test@sha256:9ac5b625650ea5104d6813321e8f90db786e2e08c853e3cdbb6c93017f2e7206"],"repoTags":["localhost/minikube-local-cache-test:functional-302200"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/k
ube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"siz
e":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-302200"],"size":"34114467"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4b
c6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDi
gests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-302200 image ls --format json --alsologtostderr:
I0115 09:39:57.640720   22103 out.go:296] Setting OutFile to fd 1 ...
I0115 09:39:57.640851   22103 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.640892   22103 out.go:309] Setting ErrFile to fd 2...
I0115 09:39:57.640909   22103 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.641111   22103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
I0115 09:39:57.641742   22103 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.641928   22103 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.642409   22103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.642499   22103 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.658993   22103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43435
I0115 09:39:57.659561   22103 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.660253   22103 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.660281   22103 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.660658   22103 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.660829   22103 main.go:141] libmachine: (functional-302200) Calling .GetState
I0115 09:39:57.662776   22103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.662810   22103 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.682090   22103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
I0115 09:39:57.682516   22103 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.683001   22103 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.683030   22103 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.683403   22103 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.683577   22103 main.go:141] libmachine: (functional-302200) Calling .DriverName
I0115 09:39:57.683756   22103 ssh_runner.go:195] Run: systemctl --version
I0115 09:39:57.683784   22103 main.go:141] libmachine: (functional-302200) Calling .GetSSHHostname
I0115 09:39:57.686791   22103 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.687221   22103 main.go:141] libmachine: (functional-302200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:a4:c3", ip: ""} in network mk-functional-302200: {Iface:virbr1 ExpiryTime:2024-01-15 10:36:25 +0000 UTC Type:0 Mac:52:54:00:5a:a4:c3 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:functional-302200 Clientid:01:52:54:00:5a:a4:c3}
I0115 09:39:57.687258   22103 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined IP address 192.168.50.213 and MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.687359   22103 main.go:141] libmachine: (functional-302200) Calling .GetSSHPort
I0115 09:39:57.687511   22103 main.go:141] libmachine: (functional-302200) Calling .GetSSHKeyPath
I0115 09:39:57.687739   22103 main.go:141] libmachine: (functional-302200) Calling .GetSSHUsername
I0115 09:39:57.687874   22103 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/functional-302200/id_rsa Username:docker}
I0115 09:39:57.789421   22103 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 09:39:57.857267   22103 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.857283   22103 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.857567   22103 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.857591   22103 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:39:57.857608   22103 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.857618   22103 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.857906   22103 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
I0115 09:39:57.857963   22103 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.857986   22103 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-302200 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0a8effd6674111a23c081f58d5bc1f3c285204c6f222cbb22c3a802d3c02b43a
repoDigests:
- localhost/minikube-local-cache-test@sha256:9ac5b625650ea5104d6813321e8f90db786e2e08c853e3cdbb6c93017f2e7206
repoTags:
- localhost/minikube-local-cache-test:functional-302200
size: "3345"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-302200
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-302200 image ls --format yaml --alsologtostderr:
I0115 09:39:57.356565   22057 out.go:296] Setting OutFile to fd 1 ...
I0115 09:39:57.356646   22057 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.356654   22057 out.go:309] Setting ErrFile to fd 2...
I0115 09:39:57.356659   22057 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:57.356850   22057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
I0115 09:39:57.357333   22057 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.357423   22057 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:57.357797   22057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.357834   22057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.371436   22057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
I0115 09:39:57.371915   22057 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.372455   22057 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.372484   22057 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.372826   22057 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.373032   22057 main.go:141] libmachine: (functional-302200) Calling .GetState
I0115 09:39:57.375132   22057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:57.375183   22057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:57.388134   22057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
I0115 09:39:57.388528   22057 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:57.388946   22057 main.go:141] libmachine: Using API Version  1
I0115 09:39:57.388972   22057 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:57.389251   22057 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:57.389420   22057 main.go:141] libmachine: (functional-302200) Calling .DriverName
I0115 09:39:57.389638   22057 ssh_runner.go:195] Run: systemctl --version
I0115 09:39:57.389663   22057 main.go:141] libmachine: (functional-302200) Calling .GetSSHHostname
I0115 09:39:57.392180   22057 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.392509   22057 main.go:141] libmachine: (functional-302200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:a4:c3", ip: ""} in network mk-functional-302200: {Iface:virbr1 ExpiryTime:2024-01-15 10:36:25 +0000 UTC Type:0 Mac:52:54:00:5a:a4:c3 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:functional-302200 Clientid:01:52:54:00:5a:a4:c3}
I0115 09:39:57.392537   22057 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined IP address 192.168.50.213 and MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:57.392771   22057 main.go:141] libmachine: (functional-302200) Calling .GetSSHPort
I0115 09:39:57.392935   22057 main.go:141] libmachine: (functional-302200) Calling .GetSSHKeyPath
I0115 09:39:57.393087   22057 main.go:141] libmachine: (functional-302200) Calling .GetSSHUsername
I0115 09:39:57.393316   22057 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/functional-302200/id_rsa Username:docker}
I0115 09:39:57.495323   22057 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 09:39:57.604585   22057 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.604603   22057 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.604877   22057 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.604942   22057 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:39:57.604972   22057 main.go:141] libmachine: Making call to close driver server
I0115 09:39:57.604975   22057 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
I0115 09:39:57.604987   22057 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:39:57.605225   22057 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:39:57.605249   22057 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
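Taken together, the four ImageList* tests above run the same image listing against each supported output format; only the rendering of the result differs. The manual invocations exercised in the logs, for the same profile, are:

    out/minikube-linux-amd64 -p functional-302200 image ls --format short --alsologtostderr
    out/minikube-linux-amd64 -p functional-302200 image ls --format table --alsologtostderr
    out/minikube-linux-amd64 -p functional-302200 image ls --format json --alsologtostderr
    out/minikube-linux-amd64 -p functional-302200 image ls --format yaml --alsologtostderr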

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-302200 ssh pgrep buildkitd: exit status 1 (216.256402ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image build -t localhost/my-image:functional-302200 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image build -t localhost/my-image:functional-302200 testdata/build --alsologtostderr: (2.264674012s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-302200 image build -t localhost/my-image:functional-302200 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b0618328a9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-302200
--> 5b9cb2a7816
Successfully tagged localhost/my-image:functional-302200
5b9cb2a7816a8de977c3025ec210bf469fd50cba4d3d1416c57077f6bcd6e9bf
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-302200 image build -t localhost/my-image:functional-302200 testdata/build --alsologtostderr:
I0115 09:39:58.140691   22179 out.go:296] Setting OutFile to fd 1 ...
I0115 09:39:58.140859   22179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:58.140871   22179 out.go:309] Setting ErrFile to fd 2...
I0115 09:39:58.140878   22179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:39:58.141174   22179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
I0115 09:39:58.141937   22179 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:58.142459   22179 config.go:182] Loaded profile config "functional-302200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:39:58.142881   22179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:58.142937   22179 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:58.156525   22179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38057
I0115 09:39:58.156975   22179 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:58.157588   22179 main.go:141] libmachine: Using API Version  1
I0115 09:39:58.157612   22179 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:58.157966   22179 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:58.158183   22179 main.go:141] libmachine: (functional-302200) Calling .GetState
I0115 09:39:58.160104   22179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0115 09:39:58.160152   22179 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 09:39:58.173502   22179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
I0115 09:39:58.173829   22179 main.go:141] libmachine: () Calling .GetVersion
I0115 09:39:58.174232   22179 main.go:141] libmachine: Using API Version  1
I0115 09:39:58.174256   22179 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 09:39:58.174584   22179 main.go:141] libmachine: () Calling .GetMachineName
I0115 09:39:58.174746   22179 main.go:141] libmachine: (functional-302200) Calling .DriverName
I0115 09:39:58.174942   22179 ssh_runner.go:195] Run: systemctl --version
I0115 09:39:58.174967   22179 main.go:141] libmachine: (functional-302200) Calling .GetSSHHostname
I0115 09:39:58.177690   22179 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:58.178074   22179 main.go:141] libmachine: (functional-302200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:a4:c3", ip: ""} in network mk-functional-302200: {Iface:virbr1 ExpiryTime:2024-01-15 10:36:25 +0000 UTC Type:0 Mac:52:54:00:5a:a4:c3 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:functional-302200 Clientid:01:52:54:00:5a:a4:c3}
I0115 09:39:58.178104   22179 main.go:141] libmachine: (functional-302200) DBG | domain functional-302200 has defined IP address 192.168.50.213 and MAC address 52:54:00:5a:a4:c3 in network mk-functional-302200
I0115 09:39:58.178244   22179 main.go:141] libmachine: (functional-302200) Calling .GetSSHPort
I0115 09:39:58.178397   22179 main.go:141] libmachine: (functional-302200) Calling .GetSSHKeyPath
I0115 09:39:58.178556   22179 main.go:141] libmachine: (functional-302200) Calling .GetSSHUsername
I0115 09:39:58.178696   22179 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/functional-302200/id_rsa Username:docker}
I0115 09:39:58.269902   22179 build_images.go:151] Building image from path: /tmp/build.2171498043.tar
I0115 09:39:58.269968   22179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 09:39:58.281212   22179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2171498043.tar
I0115 09:39:58.285306   22179 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2171498043.tar: stat -c "%s %y" /var/lib/minikube/build/build.2171498043.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2171498043.tar': No such file or directory
I0115 09:39:58.285330   22179 ssh_runner.go:362] scp /tmp/build.2171498043.tar --> /var/lib/minikube/build/build.2171498043.tar (3072 bytes)
I0115 09:39:58.311111   22179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2171498043
I0115 09:39:58.320555   22179 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2171498043 -xf /var/lib/minikube/build/build.2171498043.tar
I0115 09:39:58.338516   22179 crio.go:297] Building image: /var/lib/minikube/build/build.2171498043
I0115 09:39:58.338561   22179 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-302200 /var/lib/minikube/build/build.2171498043 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0115 09:40:00.320253   22179 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-302200 /var/lib/minikube/build/build.2171498043 --cgroup-manager=cgroupfs: (1.981669565s)
I0115 09:40:00.320315   22179 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2171498043
I0115 09:40:00.333026   22179 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2171498043.tar
I0115 09:40:00.341146   22179 build_images.go:207] Built localhost/my-image:functional-302200 from /tmp/build.2171498043.tar
I0115 09:40:00.341175   22179 build_images.go:123] succeeded building to: functional-302200
I0115 09:40:00.341181   22179 build_images.go:124] failed building to: 
I0115 09:40:00.341209   22179 main.go:141] libmachine: Making call to close driver server
I0115 09:40:00.341226   22179 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:40:00.341478   22179 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
I0115 09:40:00.341495   22179 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:40:00.341510   22179 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:40:00.341521   22179 main.go:141] libmachine: Making call to close driver server
I0115 09:40:00.341534   22179 main.go:141] libmachine: (functional-302200) Calling .Close
I0115 09:40:00.341761   22179 main.go:141] libmachine: Successfully made call to close driver server
I0115 09:40:00.341776   22179 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 09:40:00.341817   22179 main.go:141] libmachine: (functional-302200) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
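The passing build above boils down to three steps visible in the log: confirm that no buildkitd is running in the guest (the pgrep is expected to exit non-zero here), copy the build context into the VM as a tarball, and let the CRI-O runtime build it with podman. A rough manual reproduction, assuming the same profile and the repository's testdata/build context, would be:

    out/minikube-linux-amd64 -p functional-302200 ssh pgrep buildkitd
    out/minikube-linux-amd64 -p functional-302200 image build -t localhost/my-image:functional-302200 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-302200 image ls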

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-302200
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr
2024/01/15 09:39:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr: (8.201184141s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.53s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr: (2.618158893s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-302200
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr
E0115 09:39:41.936660   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image load --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr: (10.698602145s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.82s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image save gcr.io/google-containers/addon-resizer:functional-302200 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image save gcr.io/google-containers/addon-resizer:functional-302200 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.986940132s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image rm gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.43417593s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-302200
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-302200 image save --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-302200 image save --daemon gcr.io/google-containers/addon-resizer:functional-302200 --alsologtostderr: (1.287967371s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-302200
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-302200
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-302200
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-302200
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (82.89s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-799339 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0115 09:40:43.377338   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-799339 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.887386438s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.89s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons enable ingress --alsologtostderr -v=5: (12.941001582s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.94s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-799339 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
TestJSONOutput/start/Command (100.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-870973 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0115 09:44:49.139816   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:44:53.846390   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:45:34.807763   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-870973 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.696641188s)
--- PASS: TestJSONOutput/start/Command (100.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-870973 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-870973 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-870973 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-870973 --output=json --user=testUser: (7.09844346s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-983951 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-983951 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.435096ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"77e1fbef-d303-4bb5-90ba-3c3528ba05c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-983951] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23eeadd8-c54e-43f8-b683-d8c6fb8b757a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"78064640-72b3-4a70-b26e-9d0022abd443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ca5ed067-fba4-420d-94d0-b607a5f32ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig"}}
	{"specversion":"1.0","id":"3d360f57-293f-4b22-bf2f-8c9112a42d72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube"}}
	{"specversion":"1.0","id":"c9d87ac2-edab-4fd5-874b-2b76b4d5e7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"792c8f87-6ac1-49bc-b9fc-a5264882953c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4438bea7-50ff-4a73-af58-b919702c6294","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-983951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-983951
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (100.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-616113 --driver=kvm2  --container-runtime=crio
E0115 09:46:39.520364   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.525664   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.535916   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.556225   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.596527   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.676813   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:39.837260   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:40.157927   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:40.798849   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:42.079306   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:44.641207   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:49.762239   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:46:56.730920   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:47:00.002578   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-616113 --driver=kvm2  --container-runtime=crio: (49.41726851s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-618729 --driver=kvm2  --container-runtime=crio
E0115 09:47:20.483629   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:48:01.444980   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-618729 --driver=kvm2  --container-runtime=crio: (48.290525855s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-616113
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-618729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-618729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-618729
helpers_test.go:175: Cleaning up "first-616113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-616113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-616113: (1.024734435s)
--- PASS: TestMinikubeProfile (100.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-713722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-713722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.848775466s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.85s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-713722 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-713722 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-731501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-731501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.200188331s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-713722 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.14s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-731501
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-731501: (1.13946568s)
--- PASS: TestMountStart/serial/Stop (1.14s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.06s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-731501
E0115 09:49:12.883679   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 09:49:21.454213   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 09:49:23.365560   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-731501: (22.056127786s)
--- PASS: TestMountStart/serial/RestartStopped (23.06s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-731501 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (108.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-975382 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0115 09:49:40.571148   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-975382 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.978252738s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.41s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-975382 -- rollout status deployment/busybox: (2.670356953s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-h2lk5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-975382 -- exec busybox-5bc68d56bd-pwx96 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.45s)

                                                
                                    
TestMultiNode/serial/AddNode (42.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-975382 -v 3 --alsologtostderr
E0115 09:51:39.519899   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 09:52:07.207497   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-975382 -v 3 --alsologtostderr: (42.410650271s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.99s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-975382 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp testdata/cp-test.txt multinode-975382:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1127644128/001/cp-test_multinode-975382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382:/home/docker/cp-test.txt multinode-975382-m02:/home/docker/cp-test_multinode-975382_multinode-975382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test_multinode-975382_multinode-975382-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382:/home/docker/cp-test.txt multinode-975382-m03:/home/docker/cp-test_multinode-975382_multinode-975382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test_multinode-975382_multinode-975382-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp testdata/cp-test.txt multinode-975382-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1127644128/001/cp-test_multinode-975382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt multinode-975382:/home/docker/cp-test_multinode-975382-m02_multinode-975382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test_multinode-975382-m02_multinode-975382.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m02:/home/docker/cp-test.txt multinode-975382-m03:/home/docker/cp-test_multinode-975382-m02_multinode-975382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test_multinode-975382-m02_multinode-975382-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp testdata/cp-test.txt multinode-975382-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1127644128/001/cp-test_multinode-975382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt multinode-975382:/home/docker/cp-test_multinode-975382-m03_multinode-975382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382 "sudo cat /home/docker/cp-test_multinode-975382-m03_multinode-975382.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 cp multinode-975382-m03:/home/docker/cp-test.txt multinode-975382-m02:/home/docker/cp-test_multinode-975382-m03_multinode-975382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 ssh -n multinode-975382-m02 "sudo cat /home/docker/cp-test_multinode-975382-m03_multinode-975382-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.63s)

                                                
                                    
TestMultiNode/serial/StopNode (2.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-975382 node stop m03: (2.092172333s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-975382 status: exit status 7 (441.999118ms)

                                                
                                                
-- stdout --
	multinode-975382
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-975382-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-975382-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr: exit status 7 (423.677975ms)

                                                
                                                
-- stdout --
	multinode-975382
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-975382-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-975382-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:52:20.675799   29010 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:52:20.676055   29010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:52:20.676064   29010 out.go:309] Setting ErrFile to fd 2...
	I0115 09:52:20.676071   29010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:52:20.676248   29010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 09:52:20.676434   29010 out.go:303] Setting JSON to false
	I0115 09:52:20.676476   29010 mustload.go:65] Loading cluster: multinode-975382
	I0115 09:52:20.676593   29010 notify.go:220] Checking for updates...
	I0115 09:52:20.676876   29010 config.go:182] Loaded profile config "multinode-975382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:52:20.676892   29010 status.go:255] checking status of multinode-975382 ...
	I0115 09:52:20.677302   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.677372   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.694552   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0115 09:52:20.694963   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.695455   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.695478   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.695848   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.696058   29010 main.go:141] libmachine: (multinode-975382) Calling .GetState
	I0115 09:52:20.697638   29010 status.go:330] multinode-975382 host status = "Running" (err=<nil>)
	I0115 09:52:20.697653   29010 host.go:66] Checking if "multinode-975382" exists ...
	I0115 09:52:20.697927   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.697992   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.712227   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33129
	I0115 09:52:20.712631   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.713175   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.713199   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.713506   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.713724   29010 main.go:141] libmachine: (multinode-975382) Calling .GetIP
	I0115 09:52:20.716522   29010 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:52:20.716962   29010 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:52:20.716993   29010 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:52:20.717095   29010 host.go:66] Checking if "multinode-975382" exists ...
	I0115 09:52:20.717358   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.717387   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.731258   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0115 09:52:20.731648   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.732071   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.732087   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.732366   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.732557   29010 main.go:141] libmachine: (multinode-975382) Calling .DriverName
	I0115 09:52:20.732726   29010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:52:20.732762   29010 main.go:141] libmachine: (multinode-975382) Calling .GetSSHHostname
	I0115 09:52:20.735052   29010 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:52:20.735478   29010 main.go:141] libmachine: (multinode-975382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:66:0a", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:49:47 +0000 UTC Type:0 Mac:52:54:00:39:66:0a Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-975382 Clientid:01:52:54:00:39:66:0a}
	I0115 09:52:20.735508   29010 main.go:141] libmachine: (multinode-975382) DBG | domain multinode-975382 has defined IP address 192.168.39.217 and MAC address 52:54:00:39:66:0a in network mk-multinode-975382
	I0115 09:52:20.735635   29010 main.go:141] libmachine: (multinode-975382) Calling .GetSSHPort
	I0115 09:52:20.735806   29010 main.go:141] libmachine: (multinode-975382) Calling .GetSSHKeyPath
	I0115 09:52:20.735958   29010 main.go:141] libmachine: (multinode-975382) Calling .GetSSHUsername
	I0115 09:52:20.736097   29010 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382/id_rsa Username:docker}
	I0115 09:52:20.825553   29010 ssh_runner.go:195] Run: systemctl --version
	I0115 09:52:20.831214   29010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:52:20.843536   29010 kubeconfig.go:92] found "multinode-975382" server: "https://192.168.39.217:8443"
	I0115 09:52:20.843558   29010 api_server.go:166] Checking apiserver status ...
	I0115 09:52:20.843594   29010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:52:20.855071   29010 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	I0115 09:52:20.863807   29010 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod638704967c86b61fc474d50d411fc862/crio-8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648"
	I0115 09:52:20.863873   29010 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod638704967c86b61fc474d50d411fc862/crio-8e218b531ed430d4ceaa06c77c3582eaa49e66ae254986ec4d90b8f7c5585648/freezer.state
	I0115 09:52:20.872679   29010 api_server.go:204] freezer state: "THAWED"
	I0115 09:52:20.872716   29010 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0115 09:52:20.878670   29010 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0115 09:52:20.878689   29010 status.go:421] multinode-975382 apiserver status = Running (err=<nil>)
	I0115 09:52:20.878700   29010 status.go:257] multinode-975382 status: &{Name:multinode-975382 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 09:52:20.878715   29010 status.go:255] checking status of multinode-975382-m02 ...
	I0115 09:52:20.878991   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.879028   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.893086   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0115 09:52:20.893454   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.893867   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.893890   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.894171   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.894308   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetState
	I0115 09:52:20.895880   29010 status.go:330] multinode-975382-m02 host status = "Running" (err=<nil>)
	I0115 09:52:20.895896   29010 host.go:66] Checking if "multinode-975382-m02" exists ...
	I0115 09:52:20.896180   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.896223   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.910467   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0115 09:52:20.910890   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.911370   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.911396   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.911743   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.911940   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetIP
	I0115 09:52:20.914776   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:52:20.915124   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:52:20.915154   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:52:20.915360   29010 host.go:66] Checking if "multinode-975382-m02" exists ...
	I0115 09:52:20.915699   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:20.915743   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:20.929508   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0115 09:52:20.929826   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:20.930325   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:20.930353   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:20.930645   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:20.930798   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .DriverName
	I0115 09:52:20.930973   29010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:52:20.930995   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHHostname
	I0115 09:52:20.933381   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:52:20.933704   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:55:d5", ip: ""} in network mk-multinode-975382: {Iface:virbr1 ExpiryTime:2024-01-15 10:50:53 +0000 UTC Type:0 Mac:52:54:00:e1:55:d5 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-975382-m02 Clientid:01:52:54:00:e1:55:d5}
	I0115 09:52:20.933731   29010 main.go:141] libmachine: (multinode-975382-m02) DBG | domain multinode-975382-m02 has defined IP address 192.168.39.95 and MAC address 52:54:00:e1:55:d5 in network mk-multinode-975382
	I0115 09:52:20.933831   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHPort
	I0115 09:52:20.933990   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHKeyPath
	I0115 09:52:20.934137   29010 main.go:141] libmachine: (multinode-975382-m02) Calling .GetSSHUsername
	I0115 09:52:20.934248   29010 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17953-4821/.minikube/machines/multinode-975382-m02/id_rsa Username:docker}
	I0115 09:52:21.013392   29010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:52:21.025603   29010 status.go:257] multinode-975382-m02 status: &{Name:multinode-975382-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 09:52:21.025634   29010 status.go:255] checking status of multinode-975382-m03 ...
	I0115 09:52:21.025944   29010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0115 09:52:21.025987   29010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 09:52:21.041705   29010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0115 09:52:21.042092   29010 main.go:141] libmachine: () Calling .GetVersion
	I0115 09:52:21.042578   29010 main.go:141] libmachine: Using API Version  1
	I0115 09:52:21.042598   29010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 09:52:21.042872   29010 main.go:141] libmachine: () Calling .GetMachineName
	I0115 09:52:21.043053   29010 main.go:141] libmachine: (multinode-975382-m03) Calling .GetState
	I0115 09:52:21.044599   29010 status.go:330] multinode-975382-m03 host status = "Stopped" (err=<nil>)
	I0115 09:52:21.044613   29010 status.go:343] host is not running, skipping remaining checks
	I0115 09:52:21.044619   29010 status.go:257] multinode-975382-m03 status: &{Name:multinode-975382-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.96s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-975382 node start m03 --alsologtostderr: (30.022370528s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.65s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-975382 node delete m03: (1.208649646s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)
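The kubectl go-template used above (and again in RestartMultiNode below) is how these multinode tests confirm node health after a topology change: it iterates every node's status.conditions and prints the status of the condition whose type is "Ready", so the expected output is one "True" per healthy node. The same check, shown on its own for readability:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# prints one line per node with the Ready condition's status (True, False or Unknown)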

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (444.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-975382 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0115 10:09:12.883886   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:09:21.454179   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:11:39.520188   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:12:24.501624   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-975382 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.979611788s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-975382 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.53s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-975382
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-975382-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-975382-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.326585ms)

                                                
                                                
-- stdout --
	* [multinode-975382-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-975382-m02' is duplicated with machine name 'multinode-975382-m02' in profile 'multinode-975382'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-975382-m03 --driver=kvm2  --container-runtime=crio
E0115 10:14:12.883416   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:14:21.454134   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-975382-m03 --driver=kvm2  --container-runtime=crio: (46.70034147s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-975382
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-975382: exit status 80 (228.744899ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-975382
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-975382-m03 already exists in multinode-975382-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-975382-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.84s)
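This test exercises two distinct name-collision checks: starting a profile whose name matches a machine name inside an existing multinode profile fails with MK_USAGE (exit 14), and "minikube node add" refuses to create a node whose generated name would collide with an existing standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the same collisions with a hypothetical profile called "demo":

	# assume "demo" is already a two-node cluster, so it owns a machine named demo-m02
	minikube start -p demo-m02      # exit 14: profile name duplicates an existing machine name
	minikube start -p demo-m03      # fine: nothing named demo-m03 exists yet
	minikube node add -p demo       # exit 80: the next node would be demo-m03, which is now a profile
	minikube delete -p demo-m03     # cleanup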

                                                
                                    
x
+
TestScheduledStopUnix (116.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-841640 --memory=2048 --driver=kvm2  --container-runtime=crio
E0115 10:19:42.568491   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-841640 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.385181864s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841640 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-841640 -n scheduled-stop-841640
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841640 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841640 -n scheduled-stop-841640
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841640
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841640
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-841640: exit status 7 (73.731009ms)

                                                
                                                
-- stdout --
	scheduled-stop-841640
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841640 -n scheduled-stop-841640
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841640 -n scheduled-stop-841640: exit status 7 (76.289053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-841640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-841640
--- PASS: TestScheduledStopUnix (116.12s)
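The scheduled-stop flow validated here is driven entirely by the stop subcommand: --schedule arms a delayed shutdown, a later --schedule supersedes any pending one (the "process already finished" messages above reflect the earlier timer being torn down), and --cancel-scheduled aborts it; while a stop is pending, minikube status exposes the countdown through the TimeToStop field, and once the host has stopped, status exits 7 and reports Stopped. A minimal sketch against a hypothetical profile:

	minikube stop -p demo --schedule 5m          # arm a stop five minutes from now
	minikube status -p demo --format={{.TimeToStop}}
	minikube stop -p demo --cancel-scheduled     # abort the pending stop
	minikube stop -p demo --schedule 15s         # re-arm; the host stops ~15s later
	minikube status -p demo --format={{.Host}}   # prints "Stopped", exit status 7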

                                                
                                    
x
+
TestRunningBinaryUpgrade (159.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.430630544 start -p running-upgrade-284445 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.430630544 start -p running-upgrade-284445 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m5.305864412s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-284445 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0115 10:24:21.452922   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-284445 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.537264854s)
helpers_test.go:175: Cleaning up "running-upgrade-284445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-284445
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-284445: (1.169197941s)
--- PASS: TestRunningBinaryUpgrade (159.71s)

                                                
                                    
x
+
TestKubernetesUpgrade (234.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m48.774444947s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-317803
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-317803: (3.340565369s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-317803 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-317803 status --format={{.Host}}: exit status 7 (117.462184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.319724637s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-317803 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (101.261599ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-317803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-317803
	    minikube start -p kubernetes-upgrade-317803 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3178032 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-317803 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317803 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.438245286s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-317803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-317803
--- PASS: TestKubernetesUpgrade (234.04s)
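The sequence above confirms that in-place version changes only work upward: starting at v1.16.0, stopping, and restarting at v1.29.0-rc.2 succeeds, while asking the same profile to return to v1.16.0 is rejected with K8S_DOWNGRADE_UNSUPPORTED (exit 106) and the suggestion block lists the supported alternatives. A condensed sketch of the path the test validates (profile name is illustrative):

	minikube start -p demo --kubernetes-version=v1.16.0
	minikube stop  -p demo
	minikube start -p demo --kubernetes-version=v1.29.0-rc.2   # upgrade in place: allowed
	minikube start -p demo --kubernetes-version=v1.16.0        # downgrade: exit 106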

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.39s)

                                                
                                    
x
+
TestPause/serial/Start (94.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-949522 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-949522 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.798776197s)
--- PASS: TestPause/serial/Start (94.80s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (185.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2274854750 start -p stopped-upgrade-605380 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2274854750 start -p stopped-upgrade-605380 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m48.831537812s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2274854750 -p stopped-upgrade-605380 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2274854750 -p stopped-upgrade-605380 stop: (2.155064939s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-605380 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-605380 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.588918598s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (185.58s)
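Both binary-upgrade tests follow the same shape: a cluster is created with an older released binary (v1.26.0, unpacked to a temporary path), in this variant stopped with that same binary, and then the freshly built out/minikube-linux-amd64 takes over the existing profile. A condensed sketch, with the temporary binary path shown as a placeholder for whatever the test extracted:

	/tmp/minikube-v1.26.0.<suffix> start -p demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.<suffix> -p demo stop
	out/minikube-linux-amd64 start -p demo --memory=2200 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 logs -p demo     # the MinikubeLogs subtest below checks this still works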

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-453827 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-453827 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (139.182596ms)

                                                
                                                
-- stdout --
	* [false-453827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 10:21:34.276805   37245 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:21:34.276934   37245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:21:34.276952   37245 out.go:309] Setting ErrFile to fd 2...
	I0115 10:21:34.276959   37245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:21:34.277252   37245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-4821/.minikube/bin
	I0115 10:21:34.278085   37245 out.go:303] Setting JSON to false
	I0115 10:21:34.279328   37245 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3794,"bootTime":1705310300,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 10:21:34.279421   37245 start.go:138] virtualization: kvm guest
	I0115 10:21:34.282049   37245 out.go:177] * [false-453827] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 10:21:34.284120   37245 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:21:34.284047   37245 notify.go:220] Checking for updates...
	I0115 10:21:34.285850   37245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:21:34.287505   37245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	I0115 10:21:34.289265   37245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	I0115 10:21:34.291150   37245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 10:21:34.292762   37245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:21:34.294961   37245 config.go:182] Loaded profile config "offline-crio-592715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:21:34.295081   37245 config.go:182] Loaded profile config "pause-949522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:21:34.295196   37245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:21:34.335448   37245 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 10:21:34.337145   37245 start.go:298] selected driver: kvm2
	I0115 10:21:34.337166   37245 start.go:902] validating driver "kvm2" against <nil>
	I0115 10:21:34.337186   37245 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:21:34.340854   37245 out.go:177] 
	W0115 10:21:34.342500   37245 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0115 10:21:34.344089   37245 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-453827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-453827

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453827"

                                                
                                                
----------------------- debugLogs end: false-453827 [took: 3.315575408s] --------------------------------
helpers_test.go:175: Cleaning up "false-453827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-453827
--- PASS: TestNetworkPlugins/group/false (3.59s)
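The "false" network-plugin case is expected to fail fast: with the crio container runtime, minikube rejects --cni=false at validation time ("The crio container runtime requires CNI", MK_USAGE, exit 14), and the long debugLogs dump that follows is just the harness confirming that no cluster or kubeconfig context was ever created. For contrast, a hedged sketch of invocations that pass validation (the bridge value is used here as an illustrative choice of CNI, not something this test runs):

	# rejected: crio needs a CNI plugin
	minikube start -p demo --container-runtime=crio --cni=false
	# accepted: let minikube pick a CNI, or name one explicitly
	minikube start -p demo --container-runtime=crio
	minikube start -p demo --container-runtime=crio --cni=bridge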

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.787442ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-679698] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-4821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-4821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
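This case only checks flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits 14 before touching the driver, and the stderr block points at the fix when the version comes from a stored config value. A short sketch of the rejected and accepted forms (profile name is illustrative):

	minikube start -p demo --no-kubernetes --kubernetes-version=1.20   # exit 14 (MK_USAGE)
	minikube config unset kubernetes-version                           # clear a globally configured version
	minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio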

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (119.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679698 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679698 --driver=kvm2  --container-runtime=crio: (1m59.120687836s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-679698 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (119.42s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (41.12s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-949522 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-949522 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.095252911s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (43.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.741219154s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-679698 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-679698 status -o json: exit status 2 (261.702775ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-679698","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-679698
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-679698: (1.071187418s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.07s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-949522 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-949522 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-949522 --output=json --layout=cluster: exit status 2 (254.483295ms)

                                                
                                                
-- stdout --
	{"Name":"pause-949522","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-949522","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
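The non-zero exit here is expected: status --output=json --layout=cluster encodes cluster and component state in HTTP-style codes (200 OK, 405 Stopped, 418 Paused, as seen in the JSON above), and in this run it also exits 2 for the paused profile. A small sketch of pulling the readable names out of that JSON; jq is assumed to be available and is not part of the test:

	minikube status -p pause-949522 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'
	# {"cluster":"Paused","apiserver":"Paused","kubelet":"Stopped"}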

                                                
                                    
x
+
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-949522 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.9s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-949522 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-949522 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.82s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (55.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679698 --no-kubernetes --driver=kvm2  --container-runtime=crio: (55.886487121s)
--- PASS: TestNoKubernetes/serial/Start (55.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-605380
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-679698 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-679698 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.564487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (30.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.346591393s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.847329708s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-679698
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-679698: (2.884656295s)
--- PASS: TestNoKubernetes/serial/Stop (2.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (38.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679698 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679698 --driver=kvm2  --container-runtime=crio: (38.499785609s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-679698 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-679698 "sudo systemctl is-active --quiet service kubelet": exit status 1 (342.330618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (213.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-206509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0115 10:26:39.519649   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-206509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (3m33.719953612s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (213.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (118.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-824502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-824502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m58.878990699s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (112.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-781270 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:29:04.502556   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:29:12.883905   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:29:21.453148   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-781270 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m52.544871562s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (112.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-824502 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb1219dd-88d2-4145-bdfe-b716393e8b47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb1219dd-88d2-4145-bdfe-b716393e8b47] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003945872s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-824502 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-206509 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35c8f69a-939c-4f54-ac4a-ac05e16053b2] Pending
helpers_test.go:344: "busybox" [35c8f69a-939c-4f54-ac4a-ac05e16053b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35c8f69a-939c-4f54-ac4a-ac05e16053b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005321416s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-206509 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-824502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-824502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064117389s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-824502 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-206509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-206509 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-781270 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [453842a7-e912-4899-86dc-3ed65feee9c7] Pending
helpers_test.go:344: "busybox" [453842a7-e912-4899-86dc-3ed65feee9c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [453842a7-e912-4899-86dc-3ed65feee9c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005000248s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-781270 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-781270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-781270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.143696759s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-781270 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-709012 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:31:39.520033   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-709012 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m39.088673367s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a87a22c-0769-4d2b-9e34-04682f1975ea] Pending
helpers_test.go:344: "busybox" [8a87a22c-0769-4d2b-9e34-04682f1975ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a87a22c-0769-4d2b-9e34-04682f1975ea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004175232s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-709012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-709012 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (718.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-206509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-206509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m57.97394121s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-206509 -n old-k8s-version-206509
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (718.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (666.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-824502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-824502 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m5.929801579s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824502 -n no-preload-824502
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (666.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (602.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-781270 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:33:55.935232   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:34:12.883056   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:34:21.452379   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-781270 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m2.106009046s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-781270 -n embed-certs-781270
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (602.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (502.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-709012 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:36:22.568781   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:36:39.519355   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
E0115 10:39:12.883348   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:39:21.452910   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
E0115 10:41:39.519711   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-709012 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m22.472528167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-709012 -n default-k8s-diff-port-709012
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (502.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-273069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0115 10:56:39.519862   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-273069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m0.058076197s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-273069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-273069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.562158141s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-273069 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-273069 --alsologtostderr -v=3: (11.129551916s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273069 -n newest-cni-273069
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273069 -n newest-cni-273069: exit status 7 (86.369446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-273069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (71.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-273069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-273069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m11.312167784s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273069 -n newest-cni-273069
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (71.60s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (102.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.940680901s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.418249023s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-273069 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-273069 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273069 -n newest-cni-273069
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273069 -n newest-cni-273069: exit status 2 (291.129738ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273069 -n newest-cni-273069
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273069 -n newest-cni-273069: exit status 2 (307.427904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-273069 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273069 -n newest-cni-273069
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273069 -n newest-cni-273069
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (94.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0115 10:59:12.883525   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/functional-302200/client.crt: no such file or directory
E0115 10:59:21.452845   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/addons-732359/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.799690722s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dq8nj" [7fc4bbc2-c611-4b87-ba3c-0ca82fea31ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.009923405s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kltlv" [05182b85-3b99-421d-9ebf-4faf0e11c408] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kltlv" [05182b85-3b99-421d-9ebf-4faf0e11c408] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004481235s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22v52" [836dcf0e-7cc3-4fe4-853e-cf2ebd642276] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-22v52" [836dcf0e-7cc3-4fe4-853e-cf2ebd642276] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00501875s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0115 11:00:06.664197   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:00:06.669476   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:00:06.679771   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:00:06.700132   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (96.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m36.659239744s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (131.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0115 11:00:27.146023   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:00:27.581487   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m11.549338772s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (131.55s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w2pth" [6b217562-da55-432a-b038-bcffd281259a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006838954s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (125.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m5.507246688s)
--- PASS: TestNetworkPlugins/group/flannel/Start (125.51s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vdzvp" [eb57832a-305b-41e0-8ec4-a043172179d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 11:00:47.626756   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:00:48.062502   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vdzvp" [eb57832a-305b-41e0-8ec4-a043172179d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.007091349s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (131.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0115 11:01:28.586962   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/no-preload-824502/client.crt: no such file or directory
E0115 11:01:29.022926   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/old-k8s-version-206509/client.crt: no such file or directory
E0115 11:01:39.519261   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-453827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m11.905514759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (131.91s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j75c2" [0bfa020a-1d20-4d37-b62a-aad9247603c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-j75c2" [0bfa020a-1d20-4d37-b62a-aad9247603c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00561709s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xclrq" [ca996650-c80d-4529-974c-79aa7095a83a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 11:02:39.206230   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xclrq" [ca996650-c80d-4529-974c-79aa7095a83a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005281966s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wzvlz" [472d999b-b13a-4b7e-916a-45ee56a78a64] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004950638s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sthcw" [33abfc1d-f2cd-443f-8c51-4abdeb1f712d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 11:02:59.686776   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-sthcw" [33abfc1d-f2cd-443f-8c51-4abdeb1f712d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00493745s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-453827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-453827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wwm59" [5b7985f0-ae9e-446a-92b4-2b77082c935d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wwm59" [5b7985f0-ae9e-446a-92b4-2b77082c935d] Running
E0115 11:03:40.647370   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/default-k8s-diff-port-709012/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006299081s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-453827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-453827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (39/310)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
246 TestNetworkPlugins/group/kubenet 3.72
252 TestStartStop/group/disable-driver-mounts 0.15
264 TestNetworkPlugins/group/cilium 3.57
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-453827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-453827

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453827"

                                                
                                                
----------------------- debugLogs end: kubenet-453827 [took: 3.537240909s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-453827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-453827
--- SKIP: TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-802186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-802186
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0115 10:21:39.519554   13482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-4821/.minikube/profiles/ingress-addon-legacy-799339/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-453827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-453827" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-453827

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: docker system info:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: cri-docker daemon status:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: cri-docker daemon config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: cri-dockerd version:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: containerd daemon status:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: containerd daemon config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: containerd config dump:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: crio daemon status:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: crio daemon config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: /etc/crio:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

>>> host: crio config:
* Profile "cilium-453827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453827"

----------------------- debugLogs end: cilium-453827 [took: 3.41920005s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-453827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-453827
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)